While so much in the threat landscape is changing, why have our Incident Response processes stayed the same? They have not adapted to the latest threats, which can put organizations at risk. This talk will present conclusions from doctoral research on the implementation of double-loop learning as a way to improve the incident response process. The presentation discusses the success of incorporating learning loops into IR cycles. Incident response is traditionally taught as a single-loop learning cycle, but research shows double-loop learning can help limit or mitigate the extent of new issues and incorporates constant learning at each phase.
As technology advances, the health care critical infrastructure sector comprises much of the potential attack surface of the national security landscape. Medical devices are being fitted with “smart” technology in order to better serve patients and stay at the forefront of health technology. However, medical devices that enable connectivity, like all other computer systems, incorporate software that is vulnerable to threats.
Medical device recalls increased 126% in the first quarter of 2018, mostly due to software issues and vulnerabilities. Abbott and Bayer, among other medical device companies, had recalls on devices based on weaknesses discovered by both government security entities and academic institutions. These devices, which included pacemakers, infusion pumps, and MRI machines, were found to have vulnerabilities ranging from buffer overflow bugs to the presence of hard-coded credentials that easily allowed unauthorized access to proprietary information.
A breach of any one of these devices could compromise data confidentiality, integrity, and availability, as well as patient safety. In order to mitigate these types of vulnerabilities, the FDA has issued guidance, as well as a vulnerability scoring system, in order to assess impact. This system assesses the attack vector, the complexity, the risk and severity of both patient harm and information compromise, and the remediation level. By utilizing a more rigorous system along these guidelines, there is hope that the threat of a medical device attack will be diminished.
This talk will explore some of the past and current vulnerabilities facing the medical device industry, and the steps that the FDA is taking to mitigate these risks.
The “ether sheet”, also known as the “blood/brain barrier,” is a sheet that covers the face of a patient in surgery. While it is practical for sterilization, it also helps the surgeon make the cut. While all doctors take an oath to “do no harm,” some need to do some harm before thorough healing can happen. Similarly, social engineers need to do some harm in order for security awareness to advance. However, there is no defined blood/brain barrier for social engineering. Without one, social engineers are vulnerable to feelings of guilt and remorse even though they are working for the greater good. Those feelings can prevent a good social engineer from becoming a great social engineer. This research explores how to build one’s own social engineering blood/brain barrier so that social engineers can protect themselves in their efforts to better protect others.
In this completely hands-on workshop, you will get to understand the techniques and methodologies that can be applied when performing a web application penetration test. Throughout this workshop, you will use Burp Suite, which combines a set of distinct tools with powerful features. Apart from gaining familiarity with the tools and techniques involved in application security testing, you will also get an opportunity to understand some of the common vulnerabilities from the OWASP Top 10 list. I will provide you with a vulnerable website, and you will uncover security issues in it even if you have never done this before!
It will also have a pinch of Agile methodology, DevSecOps, and CI/CD pipelines.
When I was 32, I lived on the beach, thrived in my made-for-me role inspecting large-scale construction sites, and enjoyed a mix of being highly social and deeply reflective. Then I had a brainstem stroke, four cerebellar strokes, and massive concussions from external trauma/being strangled. My ability to move, communicate, comprehend, or even unconsciously run homeostasis was gone. Just like that. So, what do you do when every sense has been wiped of prior filters, every known ability glitches or has been removed, and only you can climb your way out? Hear how I analytically devised an ingenious way to format a wholly new internal working platform, one based on rational steps: exploiting my body’s electrical system to remove the conscious/subconscious barrier, working with electrical impulses and ratios, and leaning on “hacking” ways around what was no longer functioning. This started with the only things I could control: breath and my tongue against my teeth. I held faith that any logical networking would provide a workable outcome. What transpired turned into something that approximately 1% of 1% do: survive, and then thrive, beyond damage to the deepest, most challenging CPU known to man. Doing this, I made 45 years of recovery in 5. An analog reality of “engineering out of the problem” that inspires joy, courage, humanity.
The terms “Threat Hunter” and “Threat Researcher” seem to be buzzing around these days. But what do they mean? What do those people do? Where do they fall in? I’m going to tell you my thoughts on what the skills and abilities of a seasoned hunter might look like and how having one could help an organization. I will talk a little about the common types of hunters, but I will talk more about my passion that is also my job. My goal is to help clear a couple of things up and maybe spark some interest in this field that I love.
[Speaker Name] has used online dating sites such as Tinder and OkCupid. At times this seems antithetical to his stance on privacy and security. To better understand the security ramifications of online dating, and to establish safer methods of doing it, he applied threat modeling to online dating. Through this he came up with a set of best practices depending on your threat model. This talk is relevant for anyone who is trying to balance privacy/security and a desire for human connection in this modern world. Due to the real and perceived dangers of online dating, the stigma that surrounds it, and the pervasiveness of it, it is a great lens through which folks can be introduced to the core principles of threat modeling. It also makes it fun to talk about!
While cyberattacks and cybersecurity may involve technical aspects, the offenders, defenders, and victims are human. Cybercrime is a broader societal problem and thus the social sciences can, and should, be included in discussions of cyberattacks and cybersecurity in both the research and education domains. This talk shares how social science has been used in research to understand adversarial movement, decision-making, adaptation, and group dynamics. It then discusses how it has been blended with data analytic techniques to delve into these areas further. On the education front, this talk shares how hands-on projects can expose students across both the hard and social sciences to areas of critical infrastructure cybersecurity, adversarial mindsets, and social engineering.
One of the first and most important lessons the risk assessor learns is that the human is the weakest link. While computers will never stray from their algorithms, the average user is naive and forgetful – making them susceptible to being socially engineered into disclosing sensitive information.
The average user doesn’t see themselves as a piece of the security puzzle – they believe that their data is sufficiently protected by immature frameworks, ethical corporations, and academia, or by affairs that are “too complicated” for them to participate in. As infosec professionals we know this is far from the truth – the compromise of an entire infrastructure can be owed to even the smallest human error. Yet our average user is unaware of the risk they carry, leading them to believe that anything they are allowed to know or do is inherently secure rather than simply being convenient for business operations.
Hackers are Scary recounts how a quest to make privacy and security a priority in the healthcare sector reveals this gap between theoretical responsibility and actual practice. Through the trials and tribulations of making health record fraud and medical device vulnerabilities an approachable topic, one thing becomes clear: the average user is actually quite concerned, but does not know how to participate. At the same time, industry culture has a tendency to pass off this gap of knowledge as the user’s laziness or irresponsibility. Only by creating an approachable dialogue with which the non-security-savvy can interface will they be able to learn how they fit into the bigger picture – both technically and culturally – and why it is necessary for them to take agency over the protection of data entrusted to them.
How can QA and Information Security be allies?
In the QA community, it’s been acknowledged for years that we want the same thing as Development and Product: good software, delivered on time, to reach our company’s goals. Information Security has been less well represented. This is the story of how we started talking across the gap, improved our requirements, and spread the practice of security testing across a department.