Updated: Jan 7, 2019
By Bill Hughes, CEO, CyberHabits
October 12, 2018
I love to swim -- especially snorkel. In the tropics, the difference between having your head above the water and below it is the difference between two worlds. The world below the surface is teeming with life and activity and mysteries... and threats. Sharks, moray eels, barracudas, jellyfish, stingrays. Coral that can gracefully scrape you and leave a deep wound. And then there’s always running out of air...

As much as is possible without a scuba tank, I am an underwater adventurer. I routinely push my way down about 30 feet below the surface so that I can swim with the fish in their environs. While I always dive with others (usually from a chartered boat), I can sometimes find myself quickly drifting 50 or 100 feet away. Despite the dangers, I would never want to give up swimming in the sea. It is deeply a part of who I am and what I do. So I have strategies that keep me safe -- snorkeling habits governing where I swim, who I swim with and how far I allow myself to drift from the boat. Within those parameters, I can stay safe.

For the past few years, I have been learning to swim in different waters. In the digital ocean, I have put my head below the surface, cleared my lenses and seen a whole new world -- one teeming with threats and attacks and vulnerabilities, and with armies of smart, collaborative, concerned and committed people stemming the onslaught. It was like the first time I snorkeled in Hawaii: I knew there were fish down there, but I had no idea how much was going on below the surface!
Junk in the Water
What I love about swimming in the ocean is that it seems to be an infinite and pure ecosystem. It breaks my heart when I hear about islands of trash or toxic spills. What I have also come to terms with is how polluted the internet is when it comes to security. This is something that has gnawed at me since the rise of the dot-com era: when the pursuit of financial gain pushed careful engineering to the back seat, it was only a matter of time before issues like security reared their ugly heads. Combine that with the internet’s empowerment of non-state bad actors, and you have a toxic brew. It has been confounding to see the way issues like security, privacy, data ownership and verification of authenticity have been systematically deprioritized in order to accelerate the growth of the internet.
Swimming Lessons from Wikipedia
In the book Open Sources 2.0, Larry Sanger, the co-founder of Wikipedia, describes how the original project (called Nupedia) was a digital encyclopedia written and peer-reviewed by PhDs and academics, seeking quality at least as good as paragons like Encyclopedia Britannica. He recounts how Jimmy Wales, in an effort to accelerate growth, removed the requirement for peer reviewers to be highly credentialed. What happened when he let everyone post articles to Wikipedia? A few things, some good, some not. Happily, the growth in usage and adoption of Wikipedia exploded. It became a place where topics too new or too niche for a traditional encyclopedia could find oxygen. How many people and projects and companies and inventions have Wikipedia pages that would otherwise never have seen the light of day? In many ways, it is one of the great wonders of the digital age. Of course, this proliferation of content led to spurious, sometimes outright fictitious, content being posted. An invisible and loosely federated army emerged to keep Wikipedia clean -- or as clean as possible. However, their efforts were not enough to convince educators that Wikipedia was reliable enough to be used as a source for research (15 years later, this is still true). Because it had opened the floodgates to accelerate growth, Wikipedia realized it had to institute more and more quality indicators and mechanisms. One immediately comes to mind: the ability to identify a page as reliable or not. (There is a whole book on the quality of Wikipedia articles.)
Now, apparently, Wikipedia has moved back closer to the original Nupedia model: much tighter quality control of content -- at least through the concept of verified content (with credentialed verifiers carrying more weight in the system than unknown contributors). This clean-up effort has been costly. Fortunately for all of us, it is underway. And it provides a model for thinking about cybersecurity. The model rests upon a few solid engineering design principles:
Design for security. If you want a secure system, design with security in mind. Wikipedia, because it was based on the Nupedia model, was designed to be a high-quality system. The fact that the content quality bar was lowered to fuel growth meant that there were more toxins to be removed. But at least the system was designed for quality at the outset. I’m sure Jimmy Wales was able to make the decision to widen the floodgates precisely because there was a way to clean things up (even if things got messier faster than he expected). But there has been a path back to quality for Wikipedia, and that path exists because quality was designed in from the start.
Verification. The quality of Wikipedia content is ensured because of verifiable contributors and editors, as well as verifiable content. That verification is both user-empowered (e.g., checking sources) and provided by Wikipedia in how it shows which pages are verified and to what extent. The ability to quickly spot suspicious content is vital to Wikipedia’s quality story. They have built those indicators into their product, and to the degree they continue to pursue quality, they will have to stay vigilant to ensure these indicators are valid and useful.
Holistic view. Wikipedia understands its whole ecosystem: content creators, reviewers and readers of many types and motivations. Its system to ensure quality content reflects all of the aspects of that ecosystem and uses them all to drive the quality point. For example, it knows when content has been submitted, changed and read. It can use all of this information to prioritize where it spends its resources checking and improving quality.
The Digital Superfund Site
I have watched the internet being built from its earliest days. My career, which began in 1981 as an Electrical Engineering student and a Bell Labs intern, is rooted in software and IT. As a UNIX programmer in my first job, I familiarized myself early with the DNA of what has become the internet. UNIX (the progenitor of most computer operating systems today, including Linux, macOS and Android) was the first system I ever learned, and it was designed with security in mind. We were like Larry Sanger and Nupedia, in part because UNIX and networked computing (the foundations of the internet) were vying for legitimacy against mainframe systems that looked at us the way a tank might look at a tricycle.

In the 1990s, as the internet took off, I was amazed at how many cautions were thrown to the wind in the chase to build this new industry. The hype cycle, fueled by venture capital, pursued one thing -- eyeballs. Getting more users was the only thing that mattered, and getting them at any cost was key. Despite the Dot Com Crash in 2001, the industry continued undeterred, having given birth to Google and Amazon.

I’ll never forget the first time I saw Twitter and Facebook. I was a product manager at a multinational educational publisher, and our goal was to connect better with students. Many of us thought: we don’t quite get it, but what better way than social media? I saw every business on the planet shifting to social media for similar reasons. Little concern was expressed about security other than not violating existing legal agreements. And many new agreements would be written with much more liberal policies: make way for innovation!

Now, three decades later, we have migrated the entire global economy onto the internet without taking into consideration that, from a security point of view, it is a digital superfund site. The level of clean-up required to make it safe again is enormous. That’s why I started CyberHabits.
Joining the Clean-up Effort
In the process of launching a cybersecurity company, I had thought I would focus on minting more cybersecurity professionals. After all, there’s a skills gap projected to grow to over 2 million people globally within a couple of years. Yet as I thought about the size of the gap and the rate at which we are producing cyber professionals, I realized that it was like trying to catch an accelerating train -- we would never get there. And because of that, we would need to find different solutions. I turned to friends and associates who were cybersecurity experts, and in our conversations, four observations emerged that have set my course through the cyber seas.

First, there is a lot of investment capital directed toward automating the detection and prevention of cyber attacks. That said, I have also discovered that most innovations are focused on isolated problems (e.g., intelligent bots that identify and fix out-of-date software), while security is a holistic problem.

Second, the majority of cyberattacks are the result of human error, meaning the volume of hacks could be reduced by as much as 77% (other things being equal, according to Verizon’s widely cited study) by eliminating human error. Might that cut the talent gap for cyber pros down to a more reasonable level?

Third, the biggest bang for the buck in improving cybersecurity for most companies is training and awareness for employees, yet many companies have inadequate or no training in that regard. An ancient saying states, “my people are destroyed for lack of knowledge,” and that seems to be happening to companies to a significant degree. For example, smaller businesses, which often don’t have ample security resources, are at increasing risk of cyber attack, and 60% of those that are bitten go bankrupt within six months. Yikes!

Fourth, there is no silver bullet.
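To make the leverage in that second observation concrete, here is a toy calculation. The 77% share is the figure cited above; the annual incident count is an invented example number, not real data:

```python
# Toy illustration: how much incident volume remains if human error
# were eliminated. The 77% share comes from the text above; the
# incident count is a hypothetical example, not a real statistic.
annual_incidents = 1000          # hypothetical incidents per year
human_error_share = 0.77         # share attributable to human error

residual = annual_incidents * (1 - human_error_share)
print(residual)  # incidents left for scarce cyber professionals to handle
```

Under those made-up numbers, roughly 230 of 1,000 incidents remain -- which is the sense in which behavior change could shrink the demand side of the talent gap.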
A friend of mine who runs a chain of retail franchises declared to me after hearing I was working on cybersecurity, “What I need is a cyber FORTRESS!” He was expressing his desire to make a single decision that would move his businesses firmly into a manageable cybersecurity posture. He didn’t care whether the problem was email or GDPR or espionage or hackers from North Korea. He wanted to know that his business was secure, and he wanted to know how to move the problem off his radar.
A Strategy for Clean Waters
When he shared this fantastical idea with me, I laughed: “Sure, wouldn’t we all.” I was convinced that he was asking for the impossible. But the thought stayed with me. Hypothetically, if you could have a “cyber fortress,” how would you build it, and what would it look like?

While not perfect by any means, the iPhone comes to mind. Relative to other mobile devices (hello, Android), it is highly secure. Why? Because security was designed in as a feature from the start -- not just the hardware or the software, but the whole ecosystem. (This is why Apple regulates the apps that can be downloaded through the App Store. Like Wikipedia, Apple’s approach to the App Store gives it the capability to secure the ecosystem of apps. How it chooses to govern that ecosystem greatly impacts the security level of the iPhone.) What I also knew was that it is not enough to design a highly secure technology, even one like the iPhone. Users need to know how to use it in a secure manner -- good cyber habits, as it were. Not only the apps, but how you configure and use them, and how you handle losing a device, complement the secure technology design of the iPhone.

This made me wonder whether we could do the same with business computing environments, and that wondering led to the birth of the concepts of CyberHabitats and CyberHabits. The CyberHabitat is the concept of a securable configuration. While cybersecurity standards exist and are widely used, they are not prescriptive enough for companies that just want to be safe. While I am adventuresome in my snorkeling, my wife is quite the opposite. “I can see just fine from the surface!” she’d tell me as I caught my breath rising from a 10-meter skin dive. She didn’t need to take the risks that I took. In the same way, there are many businesses that have learned to live with technology “out of the box.” However, rarely do they learn what they need to do to ensure they use that technology in a secure way.
How many of us who use Google’s G Suite have a high level of confidence that our security settings are what they should be relative to the business risk we want to take on? Moreover, once we do configure our systems to be secure, how effective is our training at ensuring that employees (or partners, for that matter) know how to engage with that technology without undermining our security concept? You can put a triple bolt on your home’s front door, but if your kids tell the delivery person that the key is under the doormat, the bolt counts for nothing.
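The "are our settings what they should be" question lends itself to automation: compare the live configuration against a declared security baseline and report every drift. A minimal sketch of the idea -- the setting names and values below are invented for illustration, not real G Suite settings; a real audit would pull actual values from the platform's admin API:

```python
# Sketch: audit a platform configuration against a desired security
# baseline. All setting names/values here are hypothetical examples.
BASELINE = {
    "two_factor_enforced": True,
    "external_sharing": "restricted",
    "session_timeout_hours": 12,
}

def audit(actual: dict) -> list[str]:
    """Return one finding per setting that drifts from the baseline."""
    findings = []
    for key, expected in BASELINE.items():
        got = actual.get(key)
        if got != expected:
            findings.append(f"{key}: expected {expected!r}, found {got!r}")
    return findings

# Example: a tenant that never enforced two-factor authentication.
print(audit({"two_factor_enforced": False,
             "external_sharing": "restricted",
             "session_timeout_hours": 12}))
```

Run on a schedule, a check like this turns a one-time "secure setup" into a continuously verified state -- the configuration half of a CyberHabitat.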
Restoring Security -- by Design
CyberHabits defines the way people interact with technology. Our three-part model ties together platforms (technologies and how they are used), policies (and related processes) and people (what they know and what they do). By integrating these three aspects, a business can have a holistic understanding of its security profile. Moreover, by standardizing the concept of the CyberHabitat to have a known security state (not impenetrable, but known), security risks can be understood and managed in a way that is much more scalable than what most small and mid-sized organizations can do today (larger organizations often do this bespoke, and the very best ones employ a similar concept).
Our world is now a digital one at almost every turn, and because of that, cybersecurity is and will be a major business concern for the foreseeable future. A more holistic view of cybersecurity is essential now, for three reasons. First, it is the only way to achieve the outcomes of authentic security that businesses are starting to demand. Hacking is not simply an annoyance or a juvenile prank -- it has proven itself to be an existential threat, and, as with pollution, you ignore the toxicity at your own peril. Second, that focus on outcomes will force us to think about the holistic system that is at the root of cybersecurity: the human-technology interface. The nature of cyber attacks is that bad actors seek to insert themselves into that system, and often they see an organization’s people as the softest way in. They don’t even need expensive tools to achieve such penetration; the simple skills of a con man are all that is necessary. Third, now that cybersecurity has become a board-level issue, it needs to be measured and managed. Gone are the days when the IT manager fought fires the best way he or she knew how -- maybe necessary, but no longer sufficient. Businesses will soon expect dashboards that communicate to executives what the situation is, what actions need to be taken as a result, what the options are, and how to weigh those options given constraints like time, money, people and other limited resources.
Learning is the Cybersecurity Antidote
As someone who has spent the last two decades in ed tech (educational technology), I have discovered that education is “programming for people.” Because cybersecurity is a holistic endeavor -- people, policies and platforms -- I have come to understand that there is a lot to learn in applying the best practices in learning science -- from personalization to assessment to experience design -- to the massive challenge of cybersecurity.
Learning for People. The current state of affairs is cybersecurity awareness training, which has started to include simulated phishing attacks (emails meant to trick you into giving up access to systems or valuable information). This needs to go much further, and it needs to measure learning, skills and -- more importantly -- behaviors. It is the behaviors people exhibit that make a holistic business system secure -- or not.
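Measuring behavior can start very simply: for each simulated phishing campaign, track who clicked the lure (risky behavior) and who reported it (the behavior you want to grow). A minimal sketch -- the record format and names below are invented for illustration:

```python
# Minimal behavior metrics for one simulated phishing campaign.
# The record structure is a hypothetical example, not any product's schema.
results = [
    {"user": "alice", "clicked": False, "reported": True},
    {"user": "bob",   "clicked": True,  "reported": False},
    {"user": "carol", "clicked": False, "reported": False},
    {"user": "dave",  "clicked": True,  "reported": True},
]

n = len(results)
click_rate = sum(r["clicked"] for r in results) / n    # behavior to reduce
report_rate = sum(r["reported"] for r in results) / n  # behavior to grow
print(click_rate, report_rate)
```

Tracked per campaign over time, these two rates give exactly the behavioral trendline that awareness training alone never measures.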
Learning for Policies. Policies themselves are behavioral prescriptions for an organization. These policies are the heart of cybersecurity audits and compliance initiatives like those spawned by GDPR (the European Union’s data privacy regulation, which imposes stiff penalties for non-compliance). These policies are not just CYA material; with cybersecurity, they represent the necessary behaviors to protect the enterprise from active threats. This suggests that policies and training be intimately tied together.
Learning for Platforms. Our technology platforms often come with security features and guidelines for how to get the most out of them. Yet rarely are they dogmatic about it. I asked a friend who is running a start-up, “You use G Suite to run your business, and you are counting on Google to provide secure software. But do you know whether your configuration of their software is secure? Are your people using it as intended to maximize that security?” These kinds of questions suggest that learning, such as cybersecurity awareness, should not happen in a vacuum. Rather, it should be tied to the platforms that are at the heart of the business.
Looking back on my career, I realized that I had spent time focused on each of these areas. I started out designing and building technology platforms. I spent several years re-engineering businesses and establishing policies and processes that achieved desired outcomes. And for the past two decades, I have been immersed in finding, creating and deploying best-practice solutions for educating people. Out of these three streams, CyberHabits was born. I am excited about this next adventure. Dive in with us!
(c) 2018 CyberHabits LLC. All Rights Reserved