Daily Tech Digest - February 11, 2020

Cybersecurity in 2020: From secure code to defense in depth

Pity the poor CSO in the hot seat. Understandably, some feel compelled to jump on every new threat with a point solution, which plays right into the security software industry’s marketing strategy. But no organization’s cybersecurity budget is infinite. How can CSOs possibly determine how to allocate their defensive resources most effectively? The simple answer is twofold: rationally prioritize risk and, at the same time, make the most of the useful defenses you already have in place. Few dispute that unpatched software and social engineering (including phishing) represent the highest risk in most organizations, followed by password cracking and software misconfiguration. Cut through the political and operational barriers to prompt patching, establish an effective security awareness program, train your ops folks to lock down configurations, and put two-factor authentication in place, and you’ll reduce your overall risk by an order of magnitude. Sure, anyone can reel off other big risks and vulnerabilities. If you’re operating an electric utility, for example, you need to understand highly targeted threats to critical infrastructure and how to defend against them.

Developers Can Now Get Their Google Glass Enterprise 2

When Google launched Google Glass to both consumers and developers in 2013, it drew a number of harsh critiques, mostly because it opened a large can of privacy concerns. This led the company to discontinue the product in 2015, only to relaunch it in 2017 with a focus on specialized enterprise applications. Since then, as Google explains, Google Glass Enterprise Edition 2 has seen adoption in logistics, manufacturing, healthcare, and other industries where an AR display that projects useful information while leaving the hands free for the task at hand is key. This assessment is echoed by Facebook Reality Labs lead Michael Abrash, who recently stated that mass AR adoption through devices such as Google Glass is still five to ten years away. Abrash identifies a number of technical hurdles that need to be overcome before glasses-based AR technology can succeed in the consumer arena, the top one being user interaction: “There is no way that the way we’re going to interact with AR is going to be the way that we interact with our devices today. You’re not going to take out your phone every time you want to do something.”

Biden comments reignite debate over Section 230 rule protecting online platforms

Biden and a number of Democratic senators have expressed concerns about the increase in hate speech and the flood of unchecked disinformation making their way onto these digital platforms, while Republicans want tech companies to be restricted from moderating any speech, for fear that it would curb conservative content. Each side has put forth a number of proposals, but none has gained any traction, and while there may be minor changes to the rule in the future concerning specific topics like sex trafficking, it is more likely that Section 230 is here to stay. One of the most contentious aspects of the debate over the rule concerns corporations and the differing business reasons companies either want Section 230 removed or want it reinforced. "The fight being put up by large, established, and long venerable companies like Disney, Marriott, and IBM to deflate Section 230 and remove or at least significantly diminish the protections it provides is quite multifaceted and driven by each company's individual motives," Tomaschek said. "Ultimately, however, what their individual grievances against 230 all seem to show is that the fight is essentially between old, hulking companies that have failed to adapt to the rapidly changing landscape and relatively new-on-the-scene Big Tech giants that were able to offer innovative services that consumers were quick and eager to adopt."

Cybersecurity's Perception Problem

Zero trust is based on static, concrete barriers that both disrupt operations and fail to actually stop any level of intelligent compromise. The core of zero trust is to set up barriers between otherwise connected systems in order to provide protection. For operations, employees go from seamless, connected access to being forced to log in at multiple locations to get their jobs done. From a security perspective, once inside a walled garden, everything is trusted. Thus, our average Joe has a great conversation with the guard, walks into the bank and robs it — and nobody is ever alerted. Intelligent trust, on the other hand, interviews our buddy Joe and then lets him know what he can and cannot do. The second Joe tries to do something bad, he is stopped before he can actually perpetrate any crime. Many people in the zero trust world are looking to increase the dynamic components of zero trust frameworks through options such as microsegmentation, but again, perimeter protection just does not work. In order to effectively monitor the behavior of an enterprise, that enterprise has to be broken down into its fundamental behaviors at the level of each device.
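The contrast between a walled garden and per-action vetting can be sketched in a few lines. This is a toy illustration only, not any vendor's product; the users, actions, and baseline policies are all invented:

```python
# Static "walled garden": membership in the trusted zone is the only check.
TRUSTED_ZONE = {"joe", "alice"}

def zone_allows(user: str, action: str) -> bool:
    # Perimeter model: once a user is inside, every action is permitted.
    return user in TRUSTED_ZONE

# Hypothetical per-identity behavioral baseline: the actions each
# identity is actually expected to perform.
BASELINE = {
    "joe": {"open_account", "deposit"},
    "alice": {"open_account", "audit_ledger"},
}

def behavior_allows(user: str, action: str) -> bool:
    # "Intelligent trust" model: every individual action is checked
    # against the baseline, so a deviation is stopped immediately.
    return action in BASELINE.get(user, set())

# Joe is inside the perimeter, so the static check waves him through...
assert zone_allows("joe", "rob_vault")
# ...while the behavioral check stops the same action.
assert not behavior_allows("joe", "rob_vault")
```

The point of the sketch is the granularity: the first function decides once, at the boundary; the second decides for every action a device or identity takes.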

Resilience is a skill that’s just as important as tech know-how

Even the need to learn new skills might be challenging, in that it will require people to agree to and participate in the training process, adding to the cognitive load that is already part of their job. Then, once workers have been retrained, they will face a new environment. They could be employed in different roles with no clear career trajectory. Those who move from being employed to being contingent workers may have to manage their own long-term goals. And, most daunting of all, no one knows how long this period of economic transition will last. We don’t know whether employees who make one transition will be “done,” or whether retraining and role changes will remain a continuous process. ... Governments also have a role to play in elevating resilience and could choose to incentivize or mandate action by employers. They can also prepare the next generation of workers by building these resilience skills into school curricula at all levels. Success will inevitably require a combination of all stakeholders — business, government, and individuals — driving change.

The 25 most impersonated brands in phishing attacks

Microsoft remained the primary corporate target in Q4, coming in at #3 on this quarter’s Phishers’ Favorites list. With 200 million active business users and counting, Office 365 continues to be the primary driver for Microsoft phishing. Cybercriminals seek O365 credentials in order to access sensitive corporate information and use compromised accounts to launch targeted spear phishing attacks on other employees or partners. In Q4, large volumes of file-sharing phishing were still seen, including fake OneDrive/SharePoint notifications leading directly to a phishing page and legitimate notifications leading to files containing phishing URLs. There’s also the emergence of note phishing impersonating services like OneNote and Evernote. While the campaigns are similar, the key difference is that OneNote or Evernote notes are not files, but rather HTML pages. Thus, the same technology that is used by email security vendors to scan the contents of files doesn’t work with HTML pages, which means these emails have a higher likelihood of reaching users’ inboxes.
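The scanning gap described above can be illustrated with Python's standard-library HTML parser: in note phishing there is no file attachment to hash or detonate, because the malicious URL lives in the markup of the note page itself, so a scanner has to parse the page body. The note HTML and URL below are invented for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from an HTML page, e.g. a shared
    OneNote/Evernote-style note carrying a phishing URL in its body."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical note body: no attachment to scan, just markup.
note_html = ('<html><body><p>Your document is ready:</p>'
             '<a href="http://phish.example/login">Open document</a>'
             '</body></html>')

parser = LinkExtractor()
parser.feed(note_html)
print(parser.links)  # ['http://phish.example/login']
```

A file-oriented scanner that only inspects attachments never sees this link; only a step that parses the page's markup does.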

Who should lead the push for IoT security?

“The challenge of this market is that it’s moving so fast that no regulation is going to be able to keep pace with the devices that are being connected,” said Forrester vice president and research director Merritt Maxim. “Regulations that are definitive are easy to enforce and helpful, but they’ll quickly become outdated.” The latest such effort by a governmental body is a proposed regulation in the U.K. that would impose three major mandates on IoT device manufacturers to address key security concerns: device passwords would have to be unique, and resetting them to factory defaults would be prohibited; device makers would have to offer a public point of contact for the disclosure of vulnerabilities; and device makers would have to “explicitly state the minimum length of time for which the device will receive security updates”. This proposal is patterned after a California law that took effect last month. Both sets of rules would likely have a global impact on the manufacture of IoT devices, even though they’re being imposed on limited jurisdictions. That’s because it’s expensive for device makers to create separate versions of their products.
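A hedged sketch of what checking a device against those three mandates could look like in practice. The profile format, field names, and violation strings are all invented for illustration, not drawn from the actual U.K. proposal text:

```python
def check_device(profile: dict, fleet_passwords: set) -> list:
    """Return a list of mandate violations for one IoT device profile.

    `fleet_passwords` is the set of default passwords already used by
    other devices in the product line (hypothetical input)."""
    violations = []
    # 1. Passwords must be unique per device, never a shared factory default.
    if profile["default_password"] in fleet_passwords:
        violations.append("password not unique across fleet")
    # 2. The maker must publish a vulnerability-disclosure contact.
    if not profile.get("vuln_disclosure_contact"):
        violations.append("no public vulnerability disclosure contact")
    # 3. The minimum security-update period must be stated explicitly.
    if profile.get("min_update_years") is None:
        violations.append("security update period not stated")
    return violations

profile = {"default_password": "admin123",
           "vuln_disclosure_contact": "security@vendor.example",
           "min_update_years": 3}
print(check_device(profile, fleet_passwords={"admin123", "letmein"}))
# ['password not unique across fleet']
```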

Jenkins Creator Launches ML Startup in Continuous Risk-Based Testing

Launchable is currently inviting applications to join its public beta. According to the Launchable website, its solution can identify the subset of tests that provides sufficient confidence, based on the specific risks of the changes made in the software. The site states that this is made possible by a machine learning engine that predicts the likelihood of failure for each test case given a change in the source code. This allows you to run only the meaningful subset of tests, in the order that minimizes the feedback delay. In his blog, Kawaguchi explained this further with a hypothetical scenario, asking the reader to consider a long-running test suite. He proposed that the time to feedback could be greatly reduced if machine learning could be used to "choose the right 10% of the tests that give you 80% confidence." Ariola described successful continuous testing as an activity targeted at "business risk," rather than requirements verification alone. He provided examples of how increasing levels of business agility and automation allowed companies to create a range of "competitive differentiators" in their products.
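The "right 10% of the tests that give you 80% confidence" idea can be sketched as a greedy selection over model-predicted failure probabilities. This is one plausible interpretation, not Launchable's actual algorithm; the test names and probabilities are invented:

```python
def select_tests(failure_probs: dict, confidence: float = 0.8) -> list:
    """Greedy sketch of risk-based test selection: run tests in
    descending order of predicted failure probability until the
    selected subset accounts for the requested share of the total
    expected failures for this change."""
    ordered = sorted(failure_probs, key=failure_probs.get, reverse=True)
    total = sum(failure_probs.values())
    selected, covered = [], 0.0
    for test in ordered:
        selected.append(test)
        covered += failure_probs[test]
        if total and covered / total >= confidence:
            break
    return selected

# Hypothetical model output: probability each test fails for this change.
probs = {"test_login": 0.60, "test_checkout": 0.25,
         "test_search": 0.10, "test_footer": 0.05}
print(select_tests(probs, confidence=0.8))  # ['test_login', 'test_checkout']
```

Ordering by descending failure probability also minimizes feedback delay: the tests most likely to fail run first, so a failing change is reported as early as possible.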

Why the Fed is considering a cash-backed cryptocurrency

By creating a digital coin tied to the U.S. dollar and its owner through cryptographic hash keys, consumers and businesses alike would be able to track a token they own on an immutable electronic ledger, and possibly even retrieve it if an error is made after a transfer. In turn, government agencies could trace tokens, and ensure banks are complying with know-your-customer and anti-money laundering laws. “In the US…, you have a bank account and so much money according to bank's ledger. [You] can’t say that’s my dollar,” Kornfeld said. “I think maybe they’re looking now and saying that we’ve thought about it more and there are things we could do that may make sense and maybe we should formally tokenize U.S. currency. I think this is in the early stages.” More than 80% of central banks say they're engaged in some type of central bank digital currency (CBDC) effort, according to a Bank for International Settlements survey of 66 central banks. “The latest survey suggests there is greater openness to issuing a CBDC than a year ago, and a few central banks report that they are moving forward with issuing a CBDC,” Brainard said.
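The "immutable electronic ledger" idea reduces to hash chaining: each record's hash covers the previous record's hash, so any edit to history invalidates everything after it. A toy sketch of that mechanism, not any actual CBDC design; the token and account names are invented:

```python
import hashlib
import json

def append_entry(ledger: list, entry: dict) -> None:
    """Append a transfer record whose hash chains to the previous
    record, so tampering with history breaks every later hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    ledger.append({"entry": entry, "prev": prev_hash, "hash": entry_hash})

def verify(ledger: list) -> bool:
    """Recompute every hash; any edited entry invalidates the chain."""
    prev = "0" * 64
    for rec in ledger:
        body = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

ledger = []
append_entry(ledger, {"token": "USD-0001", "from": "mint", "to": "alice"})
append_entry(ledger, {"token": "USD-0001", "from": "alice", "to": "bob"})
assert verify(ledger)

ledger[0]["entry"]["to"] = "mallory"  # tampering is detectable
assert not verify(ledger)
```

Because every token transfer lands on the chain, tracing a token is just a scan of the verified records, which is the property regulators would rely on for KYC/AML oversight.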

AI in public service must be accountable

Bill Mitchell, director of policy at BCS, the Chartered Institute for IT, added: “There is a very old adage in computer science that sums up many of the concerns around AI-enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer, it will mindlessly follow its programming and output poor, partial, flawed computations. “AI is a statistical-inference technology that learns by example. This means that if we allow AI systems to learn from ‘garbage’ examples, we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.” Mitchell said the report highlighted the importance of having diverse teams that would help to make public authorities more likely to identify any potential ethical pitfalls of an AI project. “Many contributors emphasised the importance of diversity, telling the committee that diverse teams would lead to more diverse thought, and that, in turn, this would help public authorities to identify any potential adverse impact of an AI system,” he said.
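Mitchell's "garbage in, garbage out" point can be made concrete with a toy learner: a model that learns purely by example will faithfully reproduce flaws in its training labels. All data here is invented for illustration:

```python
def train(examples):
    """Learn a lookup 'model': map each input to the label most
    often seen for it in the training examples."""
    counts = {}
    for x, label in examples:
        counts.setdefault(x, {}).setdefault(label, 0)
        counts[x][label] += 1
    return {x: max(labels, key=labels.get) for x, labels in counts.items()}

# Clean training data: outcomes labeled correctly.
clean = [("loan_repaid", "approve")] * 3 + [("loan_defaulted", "reject")] * 3
model = train(clean)
print(model["loan_defaulted"])  # 'reject'

# Flip the labels; the model mindlessly learns the flawed mapping.
garbage = [(x, "approve" if y == "reject" else "reject") for x, y in clean]
bad_model = train(garbage)
print(bad_model["loan_defaulted"])  # 'approve'
```

The learner itself is identical in both runs; only the examples differ, which is exactly why the quality of the data fed to a public-sector AI system matters as much as the system itself.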

Quote for the day:

"We are reluctant to let go of the belief that if I am to care for something I must control it." -- Peter Block
