Firstly, framing the conversation like this doesn’t get us anywhere. Are football players to blame when they lose a match? Well, in a way, but the players are also to ‘blame’ when they win. And even when they do lose, telling them that they’re the problem will only demoralize them and lead to further losses. Secondly, if blame has to lie somewhere, it surely lies with the security awareness programs rather than with the employees who rely on those programs to better protect themselves. The reason that human-error breaches continue to occur at such a rate is that – and let’s be honest here – security awareness training in its current form just doesn’t work. Training doesn’t work because, in most cases, it focuses solely on awareness. Awareness is all well and good, but increased awareness by itself is not necessarily what matters. Just because people are ‘aware’ of cyber risks doesn’t mean that, in the real world, they will behave in a more secure way.
The top priority for CIOs is to ensure companies can manage the huge and sudden spike in demand for remote-working capacity caused by the closure of offices and other facilities. “This required my team to make some adjustments to the way we supply necessary equipment and remote access to [our] networks,” explains Kota, who says Autodesk has created a self-service toolkit so that many more employees can quickly set themselves up to work remotely if the need arises. Nikolaj Sjoqvist, the chief digital officer of Waste Management, a $47 billion waste-management and environmental-services giant, says it has increased the number of licenses available for virtual private networks and is scaling up its networking capacity to support more remote work. Sjoqvist is also tapping cloud-based applications and services that can quickly be spun up to support the effort. For employees used to working on desktops, his team is leveraging virtual desktop infrastructure technology to give them access to applications on their personal computers.
Data governance is just one part of the overall discipline of data management, though an important one. Whereas data governance is about the roles, responsibilities, and processes for ensuring accountability for and ownership of data assets, DAMA defines data management as "an overarching term that describes the processes used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data." While data management has become a common term for the discipline, it is sometimes referred to as data resource management or enterprise information management. Gartner describes EIM as "an integrative discipline for structuring, describing, and governing information assets across organizational and technical boundaries to improve efficiency, promote transparency, and enable business insight." ... BARC warns that data governance is not a "big bang initiative." As a highly complex, ongoing program, data governance runs the risk of participants losing trust and interest over time. To counter that, BARC recommends starting with a manageable or application-specific prototype project and then expanding data governance across the company based on lessons learned.
Businesses are far more concerned with security, data privacy and compliance than ever before, and rightfully so. The average cost of a data breach today is $3.9 million, according to IBM. As the ever-growing wave of security and privacy incidents continues, we’ve seen legislative reactions such as GDPR and the California Consumer Privacy Act (CCPA) emerge. A decade ago, CIOs would typically manage all aspects of data security and privacy based on the advice of a dedicated information security specialist. The sheer level of regulatory, financial and reputational damage at stake has shifted those responsibilities to chief information security officers (CISOs). Now prominent board advisers, CISOs are responsible for mitigating security and privacy risks, maintaining compliance, and preventing incidents from impacting the business. In the past, CIOs would typically be responsible for collecting, organizing and retroactively reporting on company data. Now, we view data as a business enabler that can highlight meaningful trends, provide predictive models and help maximize efficiency and profitability.
Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services. With ML solutions becoming more demanding, CPU count and RAM are no longer the only ways to speed up or scale. More algorithms are being optimized for specific hardware than ever before – be it GPUs, TPUs, or “Wafer Scale Engines.” This shift towards more specialized hardware to solve AI/ML problems will accelerate, and organizations will limit their use of CPUs to only the most basic problems. The risk of obsolescence will render generic compute infrastructure for ML/AI unviable – reason enough for organizations to switch to cloud platforms. The increase in specialized chips and hardware will also lead to incremental algorithm improvements that leverage the hardware. While new hardware and chips may enable AI/ML solutions that were previously considered slow or impossible, much of the open-source tooling that currently targets generic hardware will need to be rewritten to benefit from the newer chips.
For many commentators, the security implications of opening up account data are a top concern, but open banking poses many more challenges than this. A detailed inspection of the small print of both PSD2 and the FCA’s new guidelines for payment service providers shows that the legislation has repercussions far beyond security that are not well understood by many in the financial services sector. The complexities are so vast that compliance officers may even be scratching their heads in bewilderment. Here are a few things you need to be aware of. There are now two new classes of payment service providers under PSD2. In addition to standard banks and building societies, PSD2 recognises account information service providers (AISPs) and payment initiation service providers (PISPs). The latter offer services such as bill payment and peer-to-peer transfers, by initiating “a payment from the user account to the merchant account by creating a software bridge”. The former, meanwhile, provide aggregated bank account information and analysis services. PSD2 applies not only to transactions taking place on EU soil but also to transactions where one leg is carried out by a PSP outside Europe.
As expectations and pressures evolve, corporate directors have been taking action: adding risk committees, ensuring there is a critical mass of risk expertise on the board, measuring how much attention they pay to risks, and understanding how cultural dynamics affect risk decisions. A recent Spencer Stuart study found that 12 percent of S&P 500 companies had risk committees in 2019 — a small number, but up from 9 percent in 2014. Finance and utility companies were by far the most likely to have risk committees, in no small part for regulatory reasons. But the vast majority — more than 95 percent — of S&P 500 companies assess the performance of their board of directors annually, as do 80 percent of companies in the Russell 3000, according to a 2019 report (pdf) by the Conference Board and data-mining firm ESGAUGE. There is evidence that boards are reacting to assessments. In PwC’s 2019 Annual Corporate Directors Survey, for example, an impressive 72 percent of directors said their boards made changes in response to the last board performance assessment — up from 49 percent just three years earlier.
The report found that hackers are no longer using fake invoices to trick businesspeople. Now they are pretending to be company employees asking partners to take action. In December, cybercriminals compromised the account of an employee at a Chinese venture capital firm. They spoofed the domain of an Israeli startup the Chinese firm had been working with and managed to steal $1 million in funding meant for the Israeli company. Trend Micro shared an example of a BEC email caught by the Cloud App Security platform. The email, supposedly from the CEO, included phrases like "No one else except us must be informed at this time," "First, provide me immediately the available cashflow of our bank account," and "As soon as I receive those information, I will share with you further instructions." Bad actors are also using new credential-phishing techniques, including malicious voicemails and shared files. One phishing campaign in July 2019 used fake OneNote Online pages hosted on a SharePoint subdomain that linked to a fake Microsoft login page.
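Lookalike-domain spoofing of the kind described above can often be flagged automatically by comparing a sender's domain against known partner domains. The sketch below is purely illustrative – the domain names, threshold, and function names are assumptions, not anything from the report – and real mail defenses would combine this with SPF/DKIM/DMARC and reputation checks:

```python
from difflib import SequenceMatcher

# Domains of known, trusted partners (illustrative values only).
KNOWN_DOMAINS = {"israeli-startup.com", "partner-fund.cn"}

def similarity(a: str, b: str) -> float:
    """String similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are close to -- but not exactly -- a known domain.

    Near-matches such as 'israeli-startupp.com' are classic spoofs;
    exact matches are trusted, and unrelated domains fall through to
    other checks.
    """
    for known in KNOWN_DOMAINS:
        if sender_domain == known:
            return False
        if similarity(sender_domain, known) >= threshold:
            return True
    return False

print(is_suspicious("israeli-startup.com"))   # exact match: not flagged
print(is_suspicious("israeli-startupp.com"))  # one letter off: flagged
```

A production system would also normalize Unicode homoglyphs (e.g. Cyrillic lookalike characters) before comparing, which simple edit distance alone does not catch.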
Project OWL (Organization, Whereabouts, and Logistics) creates a mesh network of Internet of Things (IoT) devices called DuckLinks. These Wi-Fi-enabled devices can be deployed or activated in disaster areas to quickly re-establish connectivity and improve communication between first responders and civilians in need. In OWL, solar- and battery-powered, water-resistant DuckLinks form a local network: each one exposes a Wi-Fi captive portal that nearby devices can join, while the DuckLinks relay traffic over low-power, long-range radio (LoRa) back to a central portal. LoRa has a greater range, about 10 km, than cellular networks. LoRa also avoids the danger of having its bandwidth throttled by cellular carriers – which actually happened in 2018 during Northern California's Mendocino Complex Fire, when Verizon slowed the first responders' internet. The DuckLinks thus provide an emergency mesh network to all Wi-Fi-enabled devices in range. It can be used both by people needing help and by first responders trying to get a grip on the situation with data analytics. Armed with this information, they can then formulate an action plan.
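The mesh idea at the heart of OWL – any node in range of any other node can eventually reach the central portal – can be illustrated with a toy relay simulation. Nothing below is OWL's actual code; the node names, topology, and breadth-first flooding scheme are all illustrative assumptions:

```python
from collections import deque

# Toy topology: which DuckLinks are within radio range of each other
# (illustrative only, not OWL's real network).
LINKS = {
    "duck-a": {"duck-b"},
    "duck-b": {"duck-a", "duck-c"},
    "duck-c": {"duck-b", "portal"},
    "portal": {"duck-c"},
}

def relay(origin: str, destination: str) -> list[str]:
    """Flood a message hop by hop (BFS) until it reaches the destination.

    Real LoRa meshes use smarter routing and deduplication, but the core
    property is the same: a message hops node-to-node until some node in
    range of the central portal delivers it.
    """
    seen = {origin}
    queue = deque([[origin]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for neighbor in LINKS[node] - seen:
            seen.add(neighbor)
            queue.append(path + [neighbor])
    return []  # destination unreachable

print(relay("duck-a", "portal"))  # ['duck-a', 'duck-b', 'duck-c', 'portal']
```

Note that no single DuckLink needs to reach the portal directly; a device two or three hops away still gets through, which is exactly what makes the mesh useful when cellular infrastructure is down.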
One of the biggest ways in which we can see AI-assisted cyber attacks affecting our daily lives is through Twitter. We’ve all heard one political party or another accusing its rival of using "bots" to misrepresent arguments or make it seem like certain factions had more followers than they actually did. Bots by themselves aren’t a huge deal, and lots of companies and services use bots to drive customer engagement and funnel people through different areas of a website. We’ve all seen the bot-powered chat boxes on sites where you might have a question, like the homepage of a college. But the real issue with bots is that they are becoming more sophisticated. In an ironic twist on the Turing test, it’s becoming increasingly difficult for people to tell bots apart from real people, even though machines once almost universally failed the exam. Google’s recent demonstrations of increasingly convincing AI-generated audio and video illustrate this trend. These bots can easily be used for misinformation, as when users marshal them to flood a Twitter thread with fake posts to sway an argument.
Quote for the day:
"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson