Daily Tech Digest - February 11, 2020

Cybersecurity in 2020: From secure code to defense in depth

Pity the poor CSO in the hot seat. Understandably, some feel compelled to jump on every new threat with a point solution, which plays right into the security software industry’s marketing strategy. But no organization’s cybersecurity budget is infinite. How can CSOs possibly determine how to allocate their defensive resources most effectively? The simple answer is twofold: Rationally prioritize risk and, at the same time, make the most of the useful defenses you already have in place. Few dispute that unpatched software and social engineering (including phishing) represent the highest risk in most organizations, followed by password cracking and software misconfiguration. Cut through political and operational barriers to ensure prompt patching, establish an effective security awareness program, train your ops folks to lock down configurations, and put two-factor authentication in place…and you’ll reduce your overall risk by an order of magnitude. Sure, anyone can reel off other big risks and vulnerabilities. If you’re operating an electric utility, for example, you need to understand highly targeted threats to critical infrastructure and how to defend against them.



Developers Can Now Get Their Google Glass Enterprise 2

When Google launched Google Glass to both consumers and developers in 2013, it drew a number of harsh critiques, mostly because it opened a large can of privacy concerns. This led the company to discontinue the product in 2015, only to relaunch it in 2017 with a focus on specialized enterprise applications. Since then, as Google explains, Google Glass Enterprise Edition 2 has seen adoption in logistics, manufacturing, healthcare, and other industries where an AR display that projects useful information while leaving the hands free to carry out one’s duties is key. This assessment is echoed by Facebook Reality Labs lead Michael Abrash, who recently stated that mass AR adoption through devices such as Google Glass will still take five to ten years. Abrash identifies a number of technical hurdles that need to be overcome before glass-based AR technology can become successful in the consumer arena, the top one being user interaction: “There is no way that the way we’re going to interact with AR is going to be the way that we interact with our devices today. You’re not going to take out your phone every time you want to do something.”


Biden comments reignite debate over Section 230 rule protecting online platforms

Biden and several Democratic senators have expressed concerns about the increase in hate speech and the flood of unchecked disinformation making its way onto these digital platforms, while Republicans want tech companies to be restricted from moderating any speech for fear that it would curb conservative content. Each side has put forth a number of proposals but none have gained any traction, and while there may be minor changes to the rule in the future concerning specific topics like sex trafficking, it is more likely Section 230 is here to stay. One of the most contentious aspects of the debate over the rule concerns corporations and the differing business reasons companies either want Section 230 removed or want it reinforced. "The fight being put up by large, established, and long venerable companies like Disney, Marriott, and IBM to deflate Section 230 and remove or at least significantly diminish the protections it provides is quite multifaceted and driven by each company's individual motives," Tomaschek said. "Ultimately, however, what their individual grievances against 230 all seem to show is that the fight is essentially between old, hulking companies that have failed to adapt to the rapidly changing landscape and relatively new-on-the-scene Big Tech giants that were able to offer innovative services that consumers were quick and eager to adopt."


Cybersecurity's Perception Problem

Zero trust is based on static, concrete barriers that both disrupt operations and fail to actually stop any level of intelligent compromise. The core of zero trust is to set up barriers between otherwise connected systems in order to provide protection. For operations, employees go from connected control to being forced to log into multiple locations to get their jobs done. From a security perspective, once inside a walled garden, everything is trusted. Thus, our average Joe has a great conversation with the guard, walks into the bank and robs it — and nobody is ever alerted. Intelligent trust, on the other hand, interviews our buddy Joe and then lets him know what he can and cannot do. The second Joe tries to do something bad, he is stopped before he can actually perpetrate any crime. Many people in the zero trust world are looking to increase the dynamic components of zero trust frameworks through options such as microsegmentation, but again, perimeter protection just does not work. In order to effectively monitor the behavior of an enterprise, that enterprise has to be broken down into its fundamental behaviors at the level of each device.
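The contrast the author draws — a perimeter model that trusts everything once inside versus per-action checks — can be made concrete with a toy sketch. This is purely illustrative (the class names and policy shape are invented for this example, not any vendor’s API):

```python
# Toy contrast between perimeter-style trust and per-action checks.
# Class names and the policy structure are hypothetical illustrations.

class PerimeterModel:
    """Once authenticated at the boundary, every action is allowed."""
    def __init__(self, authenticated_users):
        self.inside = set(authenticated_users)

    def allow(self, user, action):
        return user in self.inside  # no per-action scrutiny


class PerActionModel:
    """Each (user, action) pair is checked against an explicit policy."""
    def __init__(self, policy):
        self.policy = policy  # dict: user -> set of permitted actions

    def allow(self, user, action):
        return action in self.policy.get(user, set())


perimeter = PerimeterModel({"joe"})
per_action = PerActionModel({"joe": {"open_account", "make_deposit"}})

# Joe got past the guard, so the perimeter model lets him do anything...
print(perimeter.allow("joe", "rob_the_vault"))   # True
# ...while the per-action model stops the unauthorized step itself.
print(per_action.allow("joe", "rob_the_vault"))  # False
```

The bank-robbery anecdote in the text maps directly onto these two `allow` checks: the walled garden asks only "is Joe inside?", while the per-action model asks "is Joe permitted to do this specific thing?"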


Resilience is a skill that’s just as important as tech know-how


Even the need to learn new skills might be challenging, in that it will require people to agree to and participate in the training process, adding to the cognitive load that is already part of their job. Then, once workers have been retrained, they will face a new environment. They could be employed in different roles with no clear career trajectory. Those who move from being employed to being contingent workers may have to manage their own long-term goals. And, most daunting of all, no one knows how long this period of economic transition will last. We don’t know whether employees who make one transition will be “done,” or whether retraining and role changes will remain a continuous process. ... Governments also have a role to play in elevating resilience and could choose to incentivize or mandate action by employers. They can also prepare the next generation of workers by building these resilience skills into school curricula at all levels. Success will inevitably require a combination of all stakeholders — business, government, and individuals — driving change.


The 25 most impersonated brands in phishing attacks

Microsoft remained the primary corporate target in Q4, coming in at #3 on this quarter’s Phishers’ Favorites list. With 200 million active business users and counting, Office 365 continues to be the primary driver for Microsoft phishing. Cybercriminals seek O365 credentials in order to access sensitive corporate information and use compromised accounts to launch targeted spear phishing attacks on other employees or partners. In Q4, large volumes of file-sharing phishing were still seen, including fake OneDrive/SharePoint notifications leading directly to a phishing page and legitimate notifications leading to files containing phishing URLs. There’s also the emergence of note phishing impersonating services like OneNote and Evernote. While the campaigns are similar, the key difference is that OneNote or Evernote notes are not files, but rather HTML pages. Thus, the same technology that is used by email security vendors to scan the contents of files doesn’t work with HTML pages, which means these emails have a higher likelihood of reaching users’ inboxes.
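The point about HTML note pages evading file scanners can be illustrated with a minimal sketch: a scanner that only inspects attachments sees nothing in a OneNote-style share, but parsing the HTML body itself surfaces the embedded link. The domains below are made up for the example:

```python
# Minimal sketch: extracting links from an HTML "note" page and flagging
# any that point outside a set of trusted domains. Domain names are invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def suspicious_links(html_page, trusted_domains):
    """Return hrefs whose hostname is not in the trusted set."""
    parser = LinkExtractor()
    parser.feed(html_page)
    return [u for u in parser.links
            if urlparse(u).hostname not in trusted_domains]

note = '<html><body><a href="https://evil.example/login">View note</a></body></html>'
print(suspicious_links(note, {"onenote.com", "evernote.com"}))
# ['https://evil.example/login']
```

Real email-security products do far more (reputation, rendering, redirect chasing), but the sketch shows why content that arrives as a page rather than a file needs a different inspection path.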


Who should lead the push for IoT security?

“The challenge of this market is that it’s moving so fast that no regulation is going to be able to keep pace with the devices that are being connected,” said Forrester vice president and research director Merritt Maxim. “Regulations that are definitive are easy to enforce and helpful, but they’ll quickly become outdated.” The latest such effort by a governmental body is a proposed regulation in the U.K. that would impose three major mandates on IoT device manufacturers to address key security concerns: device passwords would have to be unique, and resetting them to factory defaults would be prohibited; device makers would have to offer a public point of contact for the disclosure of vulnerabilities; and device makers would have to “explicitly state the minimum length of time for which the device will receive security updates”. This proposal is patterned after a California law that took effect last month. Both sets of rules would likely have a global impact on the manufacture of IoT devices, even though they’re being imposed on limited jurisdictions. That’s because it’s expensive for device makers to create separate versions of their products.


Jenkins Creator Launches ML Startup in Continuous Risk-Based Testing

Launchable is currently inviting applications to join its public beta. According to the Launchable website, the solution can identify the subset of tests that provides sufficient confidence, based on the specific risks of the changes made in the software. The site states that this is made possible by a machine learning engine that predicts the likelihood of failure for each test case given a change in the source code. This allows you to run only the meaningful subset of tests, in the order that minimizes feedback delay. In his blog, Kawaguchi explained this further and wrote about a hypothetical scenario, asking the reader to consider a long-running test suite. He proposed that the time to feedback could be greatly reduced if machine learning could be used to "choose the right 10% of the tests that give you 80% confidence." Ariola described successful continuous testing as an activity which is targeted at "business risk," rather than requirements verification alone. He provided examples of how increasing levels of business agility and automation allowed companies to create a range of "competitive differentiators" in their products.
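The selection step Kawaguchi describes — run the few tests that carry most of the predicted failure risk — can be sketched in a few lines. This is an illustration of the general idea, not Launchable's actual engine; the test names and probabilities are invented:

```python
# Sketch of risk-based test selection: given per-test failure probabilities
# predicted by some model for a particular change, run the smallest
# highest-risk prefix that covers a target share of the predicted failure mass.
# (Illustrative only; not Launchable's real algorithm or data.)

def select_tests(failure_probs, confidence=0.8):
    """failure_probs: dict test_name -> predicted P(fail) for this change.
    Returns tests ordered most-likely-to-fail first, cut off once the
    selection covers `confidence` of the total predicted failure mass."""
    ranked = sorted(failure_probs.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(failure_probs.values())
    chosen, covered = [], 0.0
    for name, prob in ranked:
        chosen.append(name)
        covered += prob
        if total > 0 and covered / total >= confidence:
            break
    return chosen

probs = {"test_login": 0.60, "test_checkout": 0.25,
         "test_footer": 0.10, "test_about_page": 0.05}
print(select_tests(probs, confidence=0.8))
# ['test_login', 'test_checkout']
```

Half the suite covers 85% of the predicted failure mass here, which is the "right 10% of the tests that give you 80% confidence" intuition at a toy scale.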


Why the Fed is considering a cash-backed cryptocurrency

By creating a digital coin tied to the U.S. dollar and its owner through cryptographic hash keys, consumers and businesses alike would be able to track a token they own on an immutable electronic ledger, and possibly even retrieve it if an error is made after a transfer. In turn, government agencies could trace tokens and ensure banks are complying with know-your-customer and anti-money laundering laws. “In the US…, you have a bank account and so much money according to the bank’s ledger. [You] can’t say that’s my dollar,” Kornfeld said. “I think maybe they’re looking now and saying that we’ve thought about it more and there are things we could do that may make sense and maybe we should formally tokenize U.S. currency. I think this is in the early stages.” More than 80% of central banks say they’re engaged in some type of central bank digital currency (CBDC) effort, according to a Bank for International Settlements survey of 66 central banks. “The latest survey suggests there is greater openness to issuing a CBDC than a year ago, and a few central banks report that they are moving forward with issuing a CBDC,” Brainard said.
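The "immutable electronic ledger" property rests on a simple construction: each entry's hash covers the previous entry's hash, so altering any earlier transfer invalidates everything after it. A minimal sketch, assuming nothing about any actual Fed or CBDC design (the record fields here are invented):

```python
# Minimal hash-chained ledger sketch: tampering with any past record breaks
# verification of the chain. Illustrative only; not any central bank's design.
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash, record):
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_ledger(records):
    ledger, prev = [], GENESIS
    for rec in records:
        h = entry_hash(prev, rec)
        ledger.append({"record": rec, "hash": h})
        prev = h
    return ledger

def verify(ledger):
    prev = GENESIS
    for entry in ledger:
        if entry_hash(prev, entry["record"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = build_ledger([{"token": "T1", "from": "mint", "to": "alice"},
                       {"token": "T1", "from": "alice", "to": "bob"}])
print(verify(ledger))                    # True
ledger[0]["record"]["to"] = "mallory"    # rewrite history...
print(verify(ledger))                    # False: the chain no longer checks out
```

This is also why tokens are traceable: every transfer of "T1" sits in the chain in order, visible to anyone who can read the ledger.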


AI in public service must be accountable


Bill Mitchell, director of policy at BCS, the Chartered Institute for IT, added: “There is a very old adage in computer science that sums up many of the concerns around AI-enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer, it will mindlessly follow its programming and output poor, partial, flawed computations. “AI is a statistical-inference technology that learns by example. This means that if we allow AI systems to learn from ‘garbage’ examples, we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.” Mitchell said the report highlighted the importance of having diverse teams that would help to make public authorities more likely to identify any potential ethical pitfalls of an AI project. “Many contributors emphasised the importance of diversity, telling the committee that diverse teams would lead to more diverse thought, and that, in turn, this would help public authorities to identify any potential adverse impact of an AI system,” he said.



Quote for the day:


"We are reluctant to let go of the belief that if I am to care for something I must control it." -- Peter Block


Daily Tech Digest - February 09, 2020

Ransomware attacks are now targeting industrial control systems


Researchers found Ekans contains a list of commands and processes associated with a number of industrial control system-specific functionalities aimed at stopping these functions in a ransomware attack. While this functionality is described as limited, researchers' analysis of Ekans notes that it still represents "a deeply concerning evolution in ICS-targeting malware" because it indicates that cyber criminals are now targeting ICS operation systems purely for financial gain. Files encrypted are renamed with a random five character file extension, while victims are presented with a ransom note with an email address to contact to negotiate a ransom to be paid in cryptocurrency. In order to deploy the ransomware, the attackers behind Ekans likely need to compromise the network before executing the attack. This follows the same trend as ransomware variants like Ryuk and Megacortex, which rely on a hands-on method of deployment rather than the self-propagation followed by other forms of ransomware. The way in which Ekans is designed to target ICS operations indicates that the attackers very much have a specific target in mind, so are likely to take their time to compromise targets relevant to their plans.



Security Predicted by Gartner to Improve in DevOps Teams

One tool on Gartner’s roadmap is IAST, or "Interactive Application Security Testing". IAST helps teams understand and address security during development and testing, in a manner similar to how Application Performance Management tools helped teams understand performance. Instead of sending code to a specialized performance team to evaluate isolated tests in a lab, APM tools such as New Relic, Dynatrace, and AppDynamics used instrumentation to continuously monitor what happened in an application without requiring code changes. As a result, teams could monitor their own data without requiring dedicated study in the field of performance engineering. With tools such as IAST, teams can leverage tools to find security defects without requiring dedicated study in security risk. As a result, these newer DevOps tools can locate security defects by identifying interesting occurrences, such as: when user input reaches an SQL command without validation, where an XML parser is configured to provide local files to external users, and many other types of risk.
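The first defect class named above — user input reaching a SQL command without validation — is worth making concrete. Real IAST tools detect this by instrumenting the running application; in this toy sketch a "tainted" string type stands in for that instrumentation (all names here are invented for illustration):

```python
# Toy illustration of the defect class IAST tools flag: untrusted input
# flowing into a SQL command. A Tainted marker type stands in for the
# runtime instrumentation a real IAST agent would provide.

class Tainted(str):
    """Marks a value as untrusted user input."""

def build_query(username):
    # Vulnerable pattern: concatenating input straight into SQL.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return query, username

def audit(query, source):
    """Flag queries that contain unvalidated tainted input verbatim."""
    if isinstance(source, Tainted) and str(source) in query:
        return "FINDING: untrusted input flows into SQL command"
    return "ok"

user_input = Tainted("alice' OR '1'='1")
query, src = build_query(user_input)
print(audit(query, src))
# FINDING: untrusted input flows into SQL command
```

A parameterized query (`cursor.execute("SELECT * FROM users WHERE name = ?", (username,))` in DB-API terms) would keep the tainted value out of the command text entirely, which is exactly the fix such a finding points toward.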


Why private micro-networks could be the future of how we connect


The current social-media model isn’t quite right for family sharing. Different generations tend to congregate in different places: Facebook is Boomer paradise, Instagram appeals to Millennials, TikTok is GenZ central. Updating family about a vacation across platforms—via Instagram stories or on Facebook, for example—might not always be appropriate. Do you really want your cubicle pal, your acquaintance from book club, and your high school frenemy to be looped in as well? “Social media treats everyone—a friend, a family member, an acquaintance—the same,” says Courtney Walsh, a lecturer in human development and family sciences at the University of Texas who consulted for Cocoon. “I would argue that what we are doing is impersonal on social media.” Cocoon aims to change the way we share. It launched on Thanksgiving, with more than 10,000 users signing up from 163 countries that week, according to Monga. Everything you post stays within the group. The app is its own small world: a feed is the home screen, greeting users with updates since they last signed on; messaging capabilities include threads to help corral conversations.


Add a Turbocharged Data Grid to Your Angular Application

Developers of enterprise web applications that need to display data in grids are faced with a dilemma. Do you use plain HTML tables, which are simple and easy to build, but tired and featureless (especially since you know users will compare them to Google Sheets)? Or do you try again to find that elusive, free, open-source (but reliable) datagrid tool that will give you the performance, features, and support you need? Oh, and page bloat is always an issue... Enter Wijmo FlexGrid, a mature, fully supported data grid. It’s now over five years old and, at version 5, it’s feature-rich, incredibly fast, and has a tiny code footprint. FlexGrid features an Angular wrapper that simplifies the work required to create the Angular components you need, with support for all versions from Angular 2 through Angular 9. We will use Angular 8 in this tutorial. To see how easy it is to use Wijmo FlexGrid tables in an Angular 8 application, we’ll start with a simple Angular component containing a data-bound HTML table. Once the basic version works, we’ll alter the code and switch to Wijmo FlexGrid.


How to prioritize IT security projects

The state of IT security has never been more precarious. Advanced Persistent Threats (APTs), often state-sponsored, can embed themselves in a security environment, move laterally, and steal an organization’s critical assets without being detected for months. Cloud migration – and the challenges of handling on prem/cloud risks in an integrated manner – has created new attack paths while greatly increasing the demands placed on modern organizational security teams. These developments exacerbate the already tough mandate for IT security pros: they must be right every time, and the attackers need only be successful once. This doesn’t mean that hackers can operate with an entirely free hand; they, too, must pick and prioritize their spots. If your security is robust enough relative to other targets, attackers may judge it to be more trouble than it is worth, especially when there are so many other lightly guarded networks, devices, etc. Automation is the critical weapon in this game of attack and defend, as it allows attackers to maximize their resources and probe for the most vulnerable targets at scale. For defenders, automation plays an equally essential role.


Success follows Indiana Tech cybersecurity team

Purdue University's Tyler Jones competes in Saturday's Indiana Collegiate Cyber Defense Competition at Indiana Tech.
Darryl Togashi, director of Indiana Tech's cybersecurity program, was hired in June to beef up the university's offerings, increase lab space, and align classes with current industry requirements. As cyber-threats evolve, professional training must adapt, he said. His vision includes creating more hands-on opportunities for students to gain practical experience. Togashi's focus includes securing a CAE-CDE – or Center of Academic Excellence in Cyber Defense Education – designation from the National Security Agency and the Department of Homeland Security. Results of the review should be available in March or April, he said. If Indiana Tech doesn't receive the prestigious designation, it will get feedback on what deficiencies were found and be allowed to correct them and reapply, Togashi said. Within a year or two, university officials hope to offer working IT professionals opportunities to receive training in specific cybersecurity skills or topics. Voting machine security issues might be a focus of one, for example. Togashi is also charged with sponsoring and participating in events that introduce middle and high school students to cybersecurity at a level they can understand.


Google releases TyDi QA, a data set that aims to capture the uniqueness of languages


Whether we’re aware of it or not, AI and cybersecurity technology are nearly omnipresent in our daily lives, and they’re only gaining importance as our world becomes more connected, “intelligent,” and reliant on online or automated systems. Yet both can seem intractably technical, even to tech-savvy people, with an ominous gravity that multiplies at the intersection of their Venn diagrams. The easy metaphor is that cybersecurity is an escalating arms race between good actors and bad actors, while the advent of AI is proverbial nuclear warfare. Some of that’s true, but the reality is far more illuminating, nuanced, and accessible. Huge cybersecurity threats can be amplified by AI technologies, so cybersecurity experts need to employ AI to protect us — and they are. In this issue, we’ll discuss how some threats are more sophisticated than ever, but most are not. We’ll learn that even as attack and defense systems are supercharged by technology, the need for human expertise has become more critical, not less. And we’ll look at practical solutions to some of the most onerous threats, like deepfakes and the increasing presence of AI-powered cameras.


How Twitter Improves Resource Usage with a Deterministic Load Balancing Algorithm


Twitter has been using a client-side load balancing technique for several years with its microservices architecture. They call this technique a "deterministic aperture," and it's part of Finagle's RPC framework, an open-source project for the JVM. Finagle embeds a client-side load balancer in every client. Instead of making calls to a central server-side load balancer, all requests go straight to a destination server, without an intermediary. This reduces the need for an extra infrastructure layer, and also reduces network hops, bandwidth, and points of failure in the system. Client-side load balancing is an approach that other projects like Baker Street and Netflix Ribbon use, and companies like Yelp, Airbnb, and Stripe use it to run microservices systems. Using client-side load balancers means that there can be multiple load balancers distributed within clients throughout the system—at least one per client. Therefore, it gets complicated when trying to distribute traffic load to servers in an even manner, especially when there are thousands of servers. For this reason, Finagle's deterministic aperture algorithm combines the power-of-two-choices (P2C) approach for distributing traffic load with a deterministic method for picking which servers each client connects to.
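The P2C step itself is simple enough to sketch. The deterministic-aperture part (which narrows each client to a small, stable subset of servers) is omitted here; this shows only the classic power-of-two-choices pick, with invented server names:

```python
# Sketch of the power-of-two-choices (P2C) load-balancing step: sample two
# servers at random and send the request to the less-loaded of the pair.
# (Finagle's deterministic aperture adds a subsetting layer on top of this.)
import random

def pick_p2c(servers, load, rng=random):
    """servers: list of server names; load: dict name -> outstanding requests.
    rng only needs a .sample(seq, k) method, so it can be stubbed in tests."""
    a, b = rng.sample(servers, 2)
    return a if load[a] <= load[b] else b

servers = ["s1", "s2", "s3", "s4"]
load = {"s1": 9, "s2": 1, "s3": 5, "s4": 7}
choice = pick_p2c(servers, load)
# Whatever pair was sampled, the pick is never the heavier of the two,
# so traffic skews toward lightly loaded servers without any global scan.
```

The appeal of P2C is that each client needs load information for only two randomly chosen servers per request, yet the resulting load distribution is dramatically more even than picking one server at random.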


This crafty malware makes you retype your passwords so it can steal them


Once running on the compromised Windows system, Metamorfo terminates any running browsers and then prevents any new browser windows from using auto-complete and auto-suggest in data entry fields. This prevents the user from using auto-complete functions to enter usernames, passwords and other information, allowing the malware's keylogger functionality to collect the data the users are thus obliged to retype. It then sends that data back to a command-and-control server run by the attackers. Metamorfo even includes a function that monitors 32 keywords associated with the targeted banks, likely so that the attackers can be alerted in real time as to when a victim is trying to access online services. Researchers haven't revealed the keywords or the names of the financial institutions being targeted, as it's likely the Metamorfo campaign is still active. To help protect against falling victim to attacks using the malware, users should be wary of unexpected emails and attachments, while using an antivirus product can also help detect the malware.


How Enterprise Architecture can reduce risk in Mergers and Acquisitions


A merger involves two or more entities combining in such a way that previously recognized legal entities cease to exist. The merging entities have a shared burden in defining the new or changed business model; changes in their go-to-market strategy; customer journey, etc. This includes the complexity of the risks involved in getting the necessary clearance and support required in clearing a myriad of regulatory hurdles. All of this means risk and resource allocation decisions above and beyond the typical day to day business operations. Merging companies share an interest in articulating organizational structure and behavior from a strategy, operational, regulatory, and financial perspective. For Acquisition, the balance of sharing risk and resource allocation shifts depending on the nature of the purchase. In a friendly acquisition, it could look more like a merger in which parties are sharing some level of risk and resource decisions. Both participating organizations have an interest in representing their companies in the best possible light, but perhaps for different reasons. The acquiring firm wants to ensure they can demonstrate to the market and shareholders they are doing the right thing and will bring increased value.



Quote for the day:


"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls


Daily Tech Digest - February 08, 2020

Cultural Transformation Is Needed Before Traditional Banks Can Go Digital


Traditional banks and other financial institutions need to go through a cultural transformation before they can successfully undergo a digital transformation, according to the chief technology advocate at Starling Bank, whose statements came during a recent panel discussion. The recent debate, which was an initiative supported by widely-used messaging service Slack, was held on February 6 in London. It covered several 21st century developments including ongoing digital transformation efforts, the future of work and jobs, and how professionals will communicate and work cooperatively on projects. Slack’s panel included commentary from Stuart Templeton, head of Slack’s UK division, Jason Maude, chief technology advocate at Starling Bank, Flora Coleman, head of government relations at TransferWise, ... While speaking about recent digital transformation efforts, Maude noted that he thinks it might be possible for traditional financial institutions to adapt to change, however, it would definitely be challenging for them to do so.


Cybersecurity Priorities Are A Matter Of Perspective

A new report sponsored by Authentic8 reveals some concerning cybersecurity insights.
Cybersecurity and compliance are crucial for organizations—but they are also challenging. The threat landscape is constantly growing and shifting, making it difficult to stay one step ahead of attackers and adequately defend networks and data. A survey of cybersecurity professionals at Black Hat USA 2019 in August of last year found that 65% believe their organization will have to respond to at least one major cybersecurity breach in the next year. Addressing cyber threats and compliance mandates is even harder if the effort is not coordinated. The report from Real Time Research Reports, sponsored by Authentic8, examines survey results from 163 senior level compliance, legal, and IT managers from financial services companies or law firms with clients in the financial industry. Among the organizations that took part in the survey, it seems that these three teams each view both the issues of cybersecurity and compliance and how to address them through their own skewed lens. In other words, every problem is a “nail” that only their particular “hammer” can solve.


Ransomware installs Gigabyte driver to kill antivirus products

Gigabyte's fault lies in the unprofessional manner in which it dealt with the vulnerability report for the affected driver. Instead of acknowledging the issue and releasing a patch, Gigabyte claimed its products were not affected. The company's downright refusal to recognize the vulnerability led the researchers who found the bug to publish public details, along with proof-of-concept code to reproduce the vulnerability. This public proof-of-concept code gave attackers a roadmap to exploiting the Gigabyte driver. When public pressure was put on the company to fix the driver, Gigabyte instead chose to discontinue it rather than release a patch. But even if Gigabyte had released a patch, attackers could have simply used an older and still vulnerable version of the driver. In this case, the driver's signing certificate should have been revoked, so it wouldn't be possible to load the driver's older versions either. "Verisign, whose code signing mechanism was used to digitally sign the driver, has not revoked the signing certificate, so the Authenticode signature remains valid," the researchers noted.


A new money-laundering rule is forcing crypto exchanges to scramble

Critics have argued that the new rule is onerous because it calls on the industry to build a completely new technical infrastructure for sharing information. Because of the pseudonymous nature of cryptocurrency, it’s not necessarily obvious to exchanges, for instance, when a customer is sending money to another exchange. All they can see is a string of letters and numbers, so the sender could just as well be transferring money to another wallet the same person controls. Now exchanges will somehow have to identify themselves. Others have argued that the rule will drive would-be money launderers to use services and tools that are harder to police. Nonetheless, the industry has been left with no choice but to come up with something like the SWIFT network, but for crypto. And they’ve got to come up with something fast; FATF plans to review its progress in June. According to a new, detailed look inside the process by CoinDesk, thorny questions remain about how exactly exchanges should transmit information to each other. Should that process use a blockchain, or rely on a more traditional, centralized design?


White House reportedly aims to double AI research budget to $2B

Simply doubling the budget isn’t a magic bullet to take the lead, if anyone can be said to have it, but deploying AI to new fields is not without cost, and an increase in grants and other direct funding will almost certainly enable the technology to be applied more widely. Machine learning has proven to be useful for a huge variety of purposes and for many researchers and labs is a natural next step — but expertise and processing power cost money. It’s not clear how the funds would be disbursed; it’s possible existing programs like federal Small Business Innovation Research awards could be expanded with this topic in mind, or direct funding to research centers like the National Labs could be increased. Research into quantum computing and related fields is likewise costly. Google’s milestone last fall of achieving “quantum supremacy,” or so the claim goes, is only the beginning for the science, and neither the hardware nor software involved have much in the way of precedents.


Top Tech for Mobile Banking? GPS.

As the banking industry moves toward fully real-time solutions, with millisecond response times and immediate access to funds, precise time becomes vital. As banks continue to pursue modernization of batch-based, memo-post core solutions – which tended to use the server time for transactional timestamps – those date and time stamps, as well as location stamps, become the most important pieces of data the solutions maintain with a transaction. GPS is globally available, can be checked from anywhere, and is free for everyone to use – including banks and their mobile banking solutions. Cell towers use GPS signals to place phone activity; ATMs and cash registers use GPS data for transactions; and stock exchanges use GPS to regulate the trades that go into stock portfolios and investment funds. GPS technology not only identifies positioning; it can be used to ensure systems all over the world agree on the exact time. When you withdraw cash from an ATM or swipe your card or wearable at a store, the underlying systems need to determine (and agree upon) the exact time that the transaction occurs, for example, to prevent accounts from being overdrawn.


Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security

There has always been tension between the need for better security and the need for higher productivity. The most usable systems are not secure, and the most secure systems are often unusable. Striking the right balance between the two is vital, but achieving this balance is becoming more difficult as attack methods grow more aggressive. AI will likely come into your organization through the evolution of basic security practices. For instance, consider the standard security practice of authenticating employee and customer identities. As cybercriminals get better at spoofing users, stealing passwords and so on, organizations will be more incentivized to embrace advanced authentication technologies, such as AI-based facial recognition, gait recognition, voice recognition, keystroke dynamics and other biometrics. The 2019 Verizon Data Breach Investigations Report found that 81 percent of hacking-related breaches involved weak or stolen passwords. To counteract these attacks, sophisticated AI-based tools that enhance authentication can be leveraged.
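The keystroke-dynamics idea mentioned above can be illustrated with a toy sketch. Everything here — the function, the millisecond values, and the threshold — is invented for the example; real systems use far richer statistical or learned models:

```python
def keystroke_score(baseline, sample):
    """Mean absolute deviation between a user's stored inter-key
    timing profile (ms) and a fresh login sample. Lower = closer match.
    """
    if len(baseline) != len(sample):
        raise ValueError("profiles must cover the same key pairs")
    return sum(abs(b - s) for b, s in zip(baseline, sample)) / len(baseline)

baseline = [110, 95, 130, 100]   # typical gaps typing a passphrase
genuine  = [112, 97, 128, 103]   # same user on a later login
impostor = [180, 60, 210, 45]    # someone else typing the passphrase

THRESHOLD = 20  # illustrative cutoff in milliseconds
```

Even this crude distance separates the genuine sample from the impostor; production biometric systems layer machine learning on top of many such signals.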


Academics steal data from air-gapped systems using screen brightness variations


The method relies on making small tweaks to an LCD screen's brightness settings. The tweaks are imperceptible to the human eye, but can be detected and extracted from video feeds using algorithmic methods. This article describes this innovative new method of stealing data, but readers should be aware from the start that this attack is not something regular users should worry about, and that they are highly unlikely ever to encounter it. Named BRIGHTNESS, the attack was designed for air-gapped setups -- where computers are kept on a separate network with no internet access. Air-gapped computers are often found in government systems that store top-secret documents or enterprise networks dedicated to storing non-public proprietary information. Creative hackers might find a way to infect these systems -- such as using an infected USB thumb drive that's plugged into them -- but getting data out of air-gapped networks is the harder part. This is the problem a team of academics at Ben-Gurion University of the Negev in Israel has specialized in solving.
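The modulation idea can be sketched in a few lines. This is a simplified illustration of the general covert-channel technique, not the researchers' actual code; the function names and the one-unit brightness delta are assumptions for the example:

```python
def encode_bits(frame_brightness, bits, delta=1):
    """Encode one bit per frame as a tiny brightness offset.

    A delta of 1 unit out of 255 is invisible to the eye, yet an
    attacker comparing successive frames in a video feed of the
    screen can still recover it.
    """
    return [b + (delta if bit else 0) for b, bit in zip(frame_brightness, bits)]

def decode_bits(modulated, baseline):
    """Recover the bit stream by comparing against the known baseline."""
    return [1 if m > b else 0 for m, b in zip(modulated, baseline)]

baseline = [128] * 8              # nominal brightness of 8 frames
payload  = [1, 0, 1, 1, 0, 0, 1, 0]
frames   = encode_bits(baseline, payload)
```

The bandwidth of such a channel is tiny — a handful of bits per second — which is why it suits exfiltrating keys and credentials rather than bulk data.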


5 Mistakes Startups Make And How To Avoid Them

The most common mistake founders make when starting out is hiring people just like themselves. The danger in this unconscious bias, Weinstein says, is that without people who challenge the founder’s thinking, companies fail to identify new opportunities or to spot risks until it’s too late. The most successful startups hire people who bring a diversity of ideas, backgrounds and (complementary) skillsets, he says. Weinstein also cautions founders against hiring too many people too quickly. “Run lean,” he says. For early-stage technology startups, for example, there tend to be two critical roles: “one person who builds the product and the other who sells it.” ... Startups are notorious for overestimating the demand for their products and not knowing how to bring their big ideas to market. Both de Beco and Weinstein urge B2B startups in particular to document their “paths to profitability,” conducting rigorous analyses that challenge their initial market estimates. That analysis requires a deep understanding of the strengths and weaknesses of competitors and, above all, determining the company’s proprietary advantage in each market segment it’s looking to penetrate.


A new implant for blind people jacks directly into the brain


This was possible thanks to a modified pair of glasses, blacked out and fitted with a tiny camera. The contraption is hooked up to a computer that processes a live video feed, turning it into electronic signals. A cable suspended from the ceiling links the system to a port embedded in the back of Gómez’s skull that is wired to a 100-electrode implant in the visual cortex in the rear of her brain. Using this, Gómez identified ceiling lights, letters, basic shapes printed on paper, and people. She even played a simple Pac-Man–like computer game piped directly into her brain. Four days a week for the duration of the experiment, Gómez was led to a lab by her sighted husband and hooked into the system. Gómez’s first moment of sight, at the end of 2018, was the culmination of decades of research by Eduardo Fernandez, director of neuroengineering at Miguel Hernández University in Elche, Spain. His goal: to return sight to as many as possible of the 36 million blind people worldwide who wish to see again. Fernandez’s approach is particularly exciting because it bypasses the eye and optical nerves.



Quote for the day:


"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch


Daily Tech Digest - February 07, 2020

Storing for the future: How data centers will advance in 2020


Data center construction is big business, with cloud companies spending over US$150 billion on new construction in the first half of 2019 alone. Does this spell doom for the on-premise server farm? Gartner Research VP David Cappuccio certainly thinks so. In a blog post called “The Data Center is Dead,” the veteran infrastructure researcher asserts his belief that by 2025 no less than 80% of enterprises will have shut down their on-premise data centers. The crux of his argument is that most of the advantages of traditional data centers have evaporated thanks to technological advancements—notably faster data transfer and the greater operational efficiencies at hyper-scale that mammoth server farms enable. The real tipping point, though, is at the Edge. Edge data centers are located close to customers’ physical locations, reducing latency. This improves service for more intensive needs like gaming, streaming and cloud computing. Having local nodes allows larger distributed cloud networks to also offer consistent enterprise-quality performance, even outside of high-tier regions like New York and San Francisco.


Why trust and technology go hand in hand

Business leaders also have a responsibility to ensure their wider ecosystem embodies the same values. To do this they must actively engage with their supply chain. At BT, we’re continually working with our suppliers to make sure they’re embracing the same standards as us. This year, we identified 368 high- and medium-risk suppliers and followed up with all of them within a three-month period. We can all do more, such as looking deeper, beyond direct suppliers, to expose other risks. Technology is an important part of the solution to this. Blockchain and robotic process automation allow every step of the supply chain to be tracked and verified, while similar technology in the financial space can shine a light on the flow of money, highlighting suspicious behaviour. These examples highlight how trust is a long-term project, and one that must be built over time. It’s about transparency, integrity and collaboration and the balance between allowing innovation to take place while safeguarding privacy and security.


IoT is a gold mine for hackers using fileless malware for cyberattacks

The SonicWall researchers found a 5% increase in IoT malware with 34.3 million attacks during 2019. The report lists common IoT security weaknesses as weak or hard-coded passwords, insecure networks and interfaces, and lack of secure update mechanisms. Check Point Research just announced a vulnerability in Philips Smart Hue lightbulbs. A flaw in the firmware allows attackers to take control of an individual bulb, push malicious firmware to it, and spread other malicious software throughout a network. The exploit comes from the Zigbee low-power IoT protocol used for device communication by Philips and many other IoT product manufacturers. It seems obvious, but it's worth repeating this observation from the report, "Given the tenuous landscape regarding data privacy, and the fact that everything from nanny cams to doorbells are connected, IoT-focused attacks will only increase in 2020 and beyond." For 2019, SonicWall Capture Labs threat researchers recorded a 52% year-over-year increase in web app attacks and found that the attacks are getting more sophisticated.


Move over, Google: Microsoft's the new Android trailblazer

After failing to serve the Android faithful for ages — first ignoring the platform entirely and then wildly misunderstanding its nature for a while — Microsoft started getting serious about our virtual stomping ground a few years back. It made almost shockingly good versions of its office apps for Android, created its own Android launcher to turn Microsoft into a focal point of the phone-using experience, and then little by little built out an entire Microsoft sub-platform that existed within Google's virtual walls. The company essentially created a Windows Phone 2.0, in other words, only this time doing it in a way that piggybacked off the world's most widely used operating system instead of trying to go up against it. Just like Google, amusingly enough, Microsoft is now taking a post-OS era approach and focusing on ecosystem over operating system. So when we heard last year that Microsoft was building its first self-made Android device, the dual-screened Microsoft Surface Duo, it was clear this wasn't gonna be Yet Another Unremarkable Android Phone. Microsoft was up to something grander here — something intriguingly unusual and decidedly different.


Secret CSO: Emily Heath, DocuSign

There are so many metrics for different aspects of security; most people think of it in terms of incidents, but there are so many other facets of what we do. We do track events, investigations and incidents like most companies do, but we also track things like critical vulnerabilities, risks (including third-party risks), and a whole bunch of other things. It's part of my job to be able to demonstrate what risk we are carrying at any one moment in time, and risks come in many different categories. ... Read, read, read! I am also a member of a number of security organisations. The wonderful thing about the security community is that everyone shares; no one wants to see anyone else get in the headlines, so it's a really strong sharing community. I am also a board advisor for a venture capital firm based in Tel Aviv called Cyberstarts who specialise in early stage seed funding for young entrepreneurs in the cyber space. It's really great to be a part of seeing where the new technology trends are going and working with young entrepreneurs to help shape their products.


Samsung's Galaxy S20 to usher in 5G upgrade cycle, savvy pricing, camera upgrades, futureproof specs


Make no mistake, the Samsung Galaxy S20 launch will be critical to the consumer electronics giant, which is also making a B2B push in 2020. Counterpoint is projecting Galaxy S20 shipments to be at least 40 million units in the first year. That tally should prop up Samsung's smartphone unit, which hasn't delivered a massive upgrade cycle since the Galaxy S7 launch. ... Samsung has become a lot savvier with its pricing models. The Galaxy S10 Lite and Galaxy Note 10 Lite illustrated how Samsung is thinking through pricing. By offering lower price points, Samsung can bring more premium features down market while keeping kitchen sink, high-spec devices high priced. The Galaxy S20 lineup is likely to keep that approach going with Counterpoint betting that the base Galaxy S20 will start at $850. Also, keep in mind that the Galaxy S10 line is likely to see price cuts. It's unclear whether the Note 10 will see price cuts too. ... Aside from Knox for security and DeX for productivity and desktop usage, Samsung is likely to tout 5G's impact on tasks like video conferencing, collaboration and sharing documents.



Q&A on the Book Managing the Unmanageable

Because software is so abstract, many executives don’t really understand it and have false expectations for how long something will take to be coded. It doesn’t help that most programmers are optimists by nature, and tend to underestimate the work to be done. So as a manager you find yourself managing expectations up, even as you’re managing programmers and teams to actually get the work done. Both are challenging tasks. Also, one of the things that makes managing programmers harder than other technical disciplines is that the work is "intangible". That is, it’s "thought stuff". You can’t look at it the way you can a printed circuit board and see that you’re making progress. Finding ways to make the software more tangible, and progress more visible, is very important. ... In Managing the Unmanageable, we talk about some of the great programmers we have known, and how their contributions have been foundational for the companies they worked for. We’ve seen it time and time again – great programmers stand above the rest by their clear contributions.


Senior IT execs blame digital transformation for outage

"The pressure is mounting for IT leaders to prepare their organizations for the future, but the impact and cost of these transformation initiatives are far greater than anyone realized," said Tej Redkar, LogicMonitor's chief product officer, said in a statement. "Our research finds that the very initiatives that are supposed to be helping modernize global organizations are in fact contributing to an initial spike in outages and brownouts, costing organizations time and money," Redkar said. Of the 300 IT leaders surveyed, 59% believe mobile computing is making brownouts and outages more common; 57% say cloud, artificial intelligence (AI), and edge computing are causing availability issues. Interestingly, perceptions of digital transformation vary according to seniority within the IT teams. Practitioners see IT transformation in a more positive light than senior executives. Nearly 65% of executives, those with a vice president titles and above, say the digital initiatives are making brownouts and outages more common. This contrasts with just 49% of IT practitioners who share the same view.


Could Google carry the BlackBerry torch?

Let's be honest, though: BlackBerry, as a phone brand, had basically been on life support for a while. The Android-based BlackBerry phones of recent years were never especially exceptional, and for all the lingering fond feelings toward the name, it didn't seem like many people were actually buying the devices. And that's to say nothing of the brand's dismal performance with delivering Android operating system updates — a harsh reality that was firmly at odds with its security-centric message. Still, if we think back a ways, there was a time when BlackBerry was synonymous with security-first, business-friendly smartphones — devices designed not for Snapchatting or WhatsApping but rather for serious professionals who wanted to get work done, use the best productivity services available, and know that their technology would always be maximally secure. ... Apple, meanwhile, plays up the security card when it's convenient, but few would dispute that Google's services are generally superior. Even critical reviews of Pixel phones tend to talk about how the devices are a delight to use because of their superlative software and the holistic, cohesive-feeling way in which they integrate Google's best properties.


How real-time data is changing governance practices

The automated features of data governance programs simplify the process of profiling data, cataloging files, inferring meaning and detecting schema changes. These programs even automatically check the quality of the data to make sure it is trustworthy and secure before a user implements it. The increasing need for real-time information has, in turn, increased the speed at which data governance professionals must work. When using programs that expedite the process, entire collections of data can be analyzed and profiled automatically instead of one file at a time, allowing the IT team to meet its needs incrementally as new data is discovered. Some sources believe that implementing AI into data governance policies can speed up the data analysis and security processes. AI can detect anomalies by applying machine learning algorithms to huge amounts of data; it can pick out an abnormal pattern and notify authorities before data can be compromised.
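As a toy stand-in for the ML-based anomaly detection described above, here is a minimal Python sketch — illustrative only, not any governance product's algorithm — that flags data points far from the mean of a series:

```python
from statistics import mean, stdev

def flag_anomalies(values, z=2.0):
    """Return indices of points more than z standard deviations from
    the mean. A crude stand-in for the learned anomaly detectors real
    governance tools use; a single large outlier inflates the stdev,
    which is one reason production systems use robust statistics.
    """
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if sigma and abs(v - mu) > z * sigma]

# Hourly access counts for a dataset, with one suspicious spike.
access_counts = [12, 14, 11, 13, 12, 15, 13, 400]
```

In a governance pipeline, a flagged index like this would trigger an alert for review before the anomalous access pattern turns into a breach.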



Quote for the day:


"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi


Daily Tech Digest - February 06, 2020

Is your CISO stressed? According to Nominet, they are

Overworked CISOs would sacrifice their salary for a better work-life balance, according to the research. Investigating the causes of CISO stress, the research found that almost all CISOs are working beyond their contracted hours, on average by 10 hours per week. And, the report suggests that even when they are not at work many CISOs feel unable to switch off. As a result, CISOs reported missing family birthdays, holidays, weddings and even funerals. They’re also not taking their annual leave, sick days or time for doctor appointments — contributing to physical and mental health problems. The key findings: 71% of CISOs said their work-life balance is too heavily weighted towards work; 95% work more than their contracted hours — on average, 10 hours longer a week — which means CISOs are giving organisations $30,319 (£23,503) worth of extra time per year; Only 2% of CISOs said they were always able to switch off from work outside of the office, with the vast majority (83%) reporting that they spend half their evenings and weekends or more thinking about work.



This latest phishing scam is spreading fake invoices loaded with malware


The attachment claims the user needs to 'enable content' in order to see the document; if this is done it allows malicious macros and malicious URLs to deliver Emotet to the machine. Because Emotet is such a prolific botnet, the malicious emails don't come from any one particular source, but rather infected Windows machines around the world. If a machine falls victim to Emotet, not only does the malware provide a backdoor into the system, allowing attackers to steal sensitive information, it also allows the attackers to use the machine to spread additional malware – or allow other hackers to exploit compromised PCs for their own gain. The campaign spiked towards the end of January and while activity has dropped for now, financial institutions are still being targeted with Emotet phishing campaigns. "We are continuing to see Emotet traffic, though the intensity has reduced considerably," Krishnan Subramanian, researcher at Menlo Labs told ZDNet. In order to protect against Emotet malware, it's recommended that users are wary of documents asking them to enable macros, especially if it's from an untrusted or unknown source. Businesses can also disable macros by default.


Research network for ethical AI launched in the UK


The initiative is being led by the Ada Lovelace Institute, an independent data and AI think tank, in partnership with the Arts and Humanities Research Council (AHRC), and will also seek to inform the development of policy and best practice around the use of AI. “The Just AI network will help ensure the development and deployment of AI and data-driven technologies serves the common good by connecting research on technical solutions with understanding of social and ethical values and impact,” said Carly Kind, director of the Ada Lovelace Institute. “We’re pleased to be working in partnership with the AHRC and with Alison Powell, whose expertise in the interrelationships between people, technology and ethics make her the ideal candidate to lead the Just AI network.” Powell, who works at the London School of Economics (LSE), specifically researches how people’s values influence how technology is built, as well as how it changes the way we live and work.


How Can We Make Election Technology Secure?

Simplified view of the chain of voting devices.  
Graphic by Ives Brant, TrustiPhi
Let's start with some common problems presented by modern-day election machines.

- Single point of failure. A compromise or malfunction of election technology could decide a presidential election.
- Between elections. Election devices might be compromised while they are stored between elections.
- Corrupt updates. Any pathway for installing new software in voting machines before each election, including USB ports, may allow corrupt updates to render the system untrustworthy.
- Weak system design. Without clear guidelines and thorough, expert evaluation, the election system is likely susceptible to many expected and unexpected attacks.
- Misplaced trust. Technology is not a magic bullet. Even voting equipment from leading brands has delivered wildly wrong results in real elections. Election administrators need to safeguard the election without relying too heavily on third parties or technologies they don't control.

It takes a lot of work to lock down a complex voting system to the point where you'd bet the children's college fund — or the future of society — on its safety.


The Human-Powered Companies That Make AI Work

Machine learning is what powers today’s AI systems. Organizations are implementing one or more of the seven patterns of AI, including computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization across a wide range of applications. However, in order for these systems to be able to create accurate generalizations, these machine learning systems must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to be able to create models with desired levels of accuracy. It goes without saying, then, that the machine learning data needs to be clean, accurate, complete, and well-labeled so the resulting machine learning models are accurate. Whereas it has always been the case that garbage in is garbage out in computing, it is especially the case with regard to machine learning data.
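One cheap check on label quality is inter-annotator agreement: have two people label the same items and measure how often they match. The sketch below is illustrative, not any labeling vendor's tooling (production pipelines use chance-corrected measures like Cohen's kappa):

```python
def agreement_rate(labels_a, labels_b):
    """Fraction of items two annotators labeled identically — a quick
    proxy for label quality before training a model on the data."""
    if len(labels_a) != len(labels_b):
        raise ValueError("annotators must label the same items")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

ann1 = ["cat", "dog", "cat", "bird", "dog"]
ann2 = ["cat", "dog", "dog", "bird", "dog"]
```

Low agreement signals ambiguous labeling guidelines or careless annotation — exactly the "garbage in" that degrades a trained model's accuracy.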


cloud security / data protection / encryption / security transition
There are multiple IaC frameworks and technologies, the most common in Palo Alto's collected data being Kubernetes YAML (39%), Terraform by HashiCorp (37%) and AWS CloudFormation (24%). Of these, 42% of identified CloudFormation templates, 22% of Terraform templates and 9% of Kubernetes YAML configuration files had a vulnerability. Palo Alto's analysis suggests that half the infrastructure deployments using AWS CloudFormation templates will have an insecure configuration. The report breaks this down further by type of impacted AWS service -- Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (RDS), Amazon Simple Storage Service (Amazon S3) or Amazon Elastic Container Service (Amazon ECS). ... The absence of database encryption and logging, which is important to protect data and investigate potential unauthorized access, was also a commonly observed issue in CloudFormation templates. Half of them did not enable S3 logging and another half did not enable S3 server-side encryption.
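A check for the two S3 gaps mentioned above can be approximated in a few lines. The Python sketch below is an illustration, not Palo Alto's scanner; it flags `AWS::S3::Bucket` resources in a CloudFormation template (JSON form) that lack the `BucketEncryption` or `LoggingConfiguration` properties:

```python
import json

def audit_s3_buckets(template_json):
    """Report S3 buckets in a CloudFormation template that are missing
    server-side encryption or access logging configuration."""
    template = json.loads(template_json)
    findings = {}
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        props = resource.get("Properties", {})
        missing = []
        if "BucketEncryption" not in props:
            missing.append("server-side encryption")
        if "LoggingConfiguration" not in props:
            missing.append("access logging")
        if missing:
            findings[name] = missing
    return findings

template = """{
  "Resources": {
    "DataBucket": {"Type": "AWS::S3::Bucket", "Properties": {}},
    "LogBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketEncryption": {},
        "LoggingConfiguration": {}
      }
    }
  }
}"""
```

Running checks like this in CI, before templates are deployed, is the usual way to keep the misconfiguration rates the report describes from reaching production.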


Serverless computing: Ready or not?

By nature, serverless computing architectures tend to be more cost-effective than alternative approaches. "A core capability of serverless is that it scales up and down to zero so that when it’s not being used you aren’t paying for it," Austin advises. With serverless technology, the customer pays for consumption, not capacity, says Kevin McMahon, executive director of mobile and emerging technologies at consulting firm SPR. He compares the serverless model to owning a car versus using a ride-sharing service. "Prior to ride sharing, if you wanted to get from point A to B reliably you likely owned a car, paid for insurance and had to maintain it," he explains. "With ride-sharing, you no longer have to worry about the car, you can just pay to get from A to B when you want—you simply pay for the job that needs to be done instead of the additional infrastructure and maintenance." Serverless computing can also help adopters avoid costs related to the overallocation of resources, ensuring that expenses are in line with actual consumption, observes Craig Tavares, head of cloud at IT service management company Aptum.
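The consumption-versus-capacity distinction can be made concrete with a back-of-the-envelope cost model. All the numbers below are invented for illustration; real provider pricing also factors in compute duration, memory allocation, and free tiers:

```python
def monthly_cost_provisioned(instances, hourly_rate, hours=730):
    """Always-on capacity: you pay whether or not requests arrive."""
    return instances * hourly_rate * hours

def monthly_cost_serverless(requests, price_per_million):
    """Pay-per-invocation: cost scales down to zero with usage."""
    return requests / 1_000_000 * price_per_million

idle_month = monthly_cost_serverless(0, 4.00)           # no traffic, no bill
busy_month = monthly_cost_serverless(10_000_000, 4.00)  # pay for what ran
fixed      = monthly_cost_provisioned(2, 0.50)          # same bill either way
```

The idle month is the ride-sharing analogy in numbers: with provisioned capacity you keep paying for the parked car, while the serverless bill scales to zero.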


Oops! Microsoft gets 'black eye' from Teams outage

“This is definitely a black eye for Microsoft, especially when it has touted its reliability in the wake of some high-profile Slack outages in the last couple of years,” said Irwin Lazar, vice president and Service Director at Nemertes Research. “It is surprising that Microsoft didn't renew its certificate, and it shows that as Teams rapidly grows they will have to ensure they are addressing operational issues to prevent further downtime.” Indeed, the prompt reaction to the outage is an indication of the growing importance of Teams as more and more office workers rely on team messaging tools. “There is nothing like taking a service down to illustrate its popularity and importance. However, this is not a best practice we recommend,” Larry Cannell, a research director at Gartner, dryly noted. An SSL certificate enables a secure connection between a web browser or app and a server, and is required for HTTPS-enabled sites. It helps protect users against security risks such as man-in-the-middle attacks by allowing data to be encrypted. When a certificate expires, the server can’t be identified and information cannot be sent. That was the case with Teams on Monday.
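Expiry of the kind that took Teams down is straightforward to monitor for. The Python sketch below (illustrative; the function name is invented) uses the standard library's `ssl.cert_time_to_seconds` to compute how many days remain before a certificate's `notAfter` date:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining on a certificate, given its 'notAfter' field as
    returned by ssl.SSLSocket.getpeercert(), e.g. 'Feb 3 12:00:00 2020 GMT'.
    A negative result means the certificate has already expired."""
    expiry = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expiry - now) / 86400
```

Alerting when this value drops below, say, 30 days is a common operational practice; an expired certificate cannot be validated, so clients refuse the connection outright, which is what happened to Teams.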


Looking to hire a '10x developer'? You can try, but it probably won't boost productivity


As Nichols notes in a blog, various studies since the 1968 one have estimated that top-performing developers are between four and 28 times more productive than average performers. But Nichols says his study found evidence to contradict the idea that some programmers are inherently far more skilled or productive than others. Performance differences are partly attributable to the skill of an individual, he writes, but each person's productivity also varies every day, depending on the task and other factors. "First, I found that most of the differences resulted from a few, very low performances, rather than exceptional high performance. Second, there are very few programmers at the extremes. Third, the same programmers were seldom best or worst," he explains. He argues that these findings should change the way a software project manager approaches recruitment. For example, they shouldn't necessarily just focus on getting the top programmers to boost organizational productivity, but find "capable" programmers and develop that talent. The study involved 494 students with an average of 3.7 years' industry experience. The students used the C programming language and were tasked with programming solutions through a set of 10 assignments.


5 steps to creating a strong data archiving policy

Suppose you decide to archive data that hasn't been modified or accessed in three years. That decision leads to a number of other questions related to the data management. For example, should all the data that meets the three-year criteria be archived, or can some types of data simply be deleted rather than archived? Likewise, will data remain in your archives forever or will the data be purged at some point? You must have specific plans that address the exact circumstances under which data should be archived, as well as a plan for what will eventually happen to archived data. Many companies assume that having a data archiving policy means they have a deletion policy; they eventually wind up wishing they had spelled out the specifics of deletion and archival. ... Regulatory compliance is also critical. Not every organization is subject to federal regulatory requirements surrounding data retention policy, but those that are can face severe penalties if they fail to properly retain required data. Multinational companies also must be aware of varying regulatory policies.
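The "not modified in three years" rule above can be sketched as a simple scan over file modification times. This is illustrative only; a real policy engine would also weigh access times, data type, regulatory holds, and the delete-versus-archive distinction the paragraph raises:

```python
import os
import time

THREE_YEARS = 3 * 365 * 24 * 3600  # approximate; tune to the written policy

def archive_candidates(root, now=None, max_age=THREE_YEARS):
    """Paths under root whose modification time is older than the
    policy window — candidates for archival (or deletion) review."""
    now = time.time() if now is None else now
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if now - os.path.getmtime(path) > max_age:
                matches.append(path)
    return matches
```

Whatever this scan selects should then flow through the policy's explicit decisions: archive, delete, or retain for compliance — rather than being moved automatically.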



Quote for the day:


"Leadership does not always wear the harness of compromise." -- Woodrow Wilson