
Daily Tech Digest - May 20, 2020

How IT and Security Leaders Are Addressing the Current Social & Economic Landscape


Despite the security and overall organizational preparedness concerns, IT and security leaders share some notes of encouragement. The majority (68%) of IT leaders agree that their technology infrastructure was prepared to adequately address employees working from home. On an even brighter note, 81% of security leaders believe that their existing security infrastructure can adequately address current work-from-home demands, and 67% feel that their security infrastructure is fully prepared to handle the associated range of risks. As more and more individuals get their jobs done from home, 71% of IT leaders say that the current situation has created a more positive view of remote workplace policies and will likely affect how they plan for office space, tech staffing and overall staffing in the future. To address the new work environment created by COVID-19, 44% of IT leaders will need to acquire new technology solutions and services.



Hackers Hit Food Supply Company

DarkOwl said its analysis shows the attackers have managed to steal some 2,600 files from Sherwood. The stolen data includes cash-flow analysis, distributor data, business insurance content, and vendor information. Included in the dataset are scanned images of driver's licenses of people in Sherwood's distribution network. The threat actors posted screenshots of a chat they had with Coveware, a ransomware mitigation firm that Sherwood had hired to help deal with the crisis. The conversation shows that Sherwood has been dealing with the attack since at least May 3, according to DarkOwl's research. The screenshots also suggest that Sherwood at one point was willing to pay $4.25 million and later $7.5 million to get its data back. In an emailed statement, a Sherwood spokeswoman said the company does not comment on active criminal investigations. ... According to DarkOwl, on Monday the attackers updated Happy Blog with news of their plan to next auction off personal data belonging to Madonna.


5 Ways to Detect Application Security Vulnerabilities Sooner to Reduce Costs and Risk

Human error is always a security concern, especially when it comes to credentials. Just consider how many times you’ve heard of developers committing code only to later realize they’d accidentally included a password. These errors can lead to high-cost consequences for organizations. There are many tools that scan for secrets and credentials accidentally committed to a source code repository; one example is Microsoft Credential Scanner (CredScan). Perform this scan in the PR/CI build to identify the issue as soon as it happens, so that exposed credentials can be rotated before they become a problem. Once an application is deployed, you can continue to scan for vulnerabilities through automated continuous delivery pipeline capabilities. Unlike SAST, which looks for potential security vulnerabilities by examining an application from the inside—at the source code—Dynamic Application Security Testing (DAST) looks at the application while it is running to identify any potential vulnerabilities that a hacker could exploit.
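A minimal sketch of this kind of pattern-based secret scan, runnable as a CI step; the rule set and names here are illustrative toys, not CredScan's actual rules:

```python
import re

# Hypothetical patterns; real scanners ship far larger, provider-specific rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_for_secrets(text):
    """Return (rule_name, matched_text) pairs for every suspected secret."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

diff = 'db_user = "svc"\npassword = "hunter2"  # accidentally committed\n'
for rule, hit in scan_for_secrets(diff):
    print(f"{rule}: {hit}")
```

Running such a scan on every pull request keeps leaked credentials from reaching the main branch; production scanners add entropy checks and many more patterns.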


For me, it is that asynchronous programming is such a paradigm shift in a system architecture that it should be analyzed very differently from a “synchronous” system. We analyzed response times but never thought how many concurrent requests there would be at any point because, in a synchronous system, the calling system is itself limited in how many concurrent calls it can generate, because of threads getting blocked for every request. This is not true for asynchronous systems, and hence a different mental model is required to understand causes and outcomes. Any large software system (especially in the current environment of dependent microservices) is essentially a data flow pipeline and any attempt to scale which does not expand the most bottlenecked part of the pipeline is useless in increasing data flow. We thought of pushing a huge amount of data through our pipeline by making Armor alone asynchronous and failed to distinguish between a matter of Speed (doing this faster) from a matter of Volume (doing a lot of it at the same time).
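Little's law (average in-flight requests = arrival rate times average latency) makes this point concrete: a synchronous caller is throttled by its own blocked threads, while an asynchronous caller is not. The numbers below are illustrative, not from the article:

```python
def in_flight_requests(arrival_rate_per_s, latency_s, thread_pool=None):
    """Concurrent requests a downstream service must absorb (Little's law)."""
    demanded = arrival_rate_per_s * latency_s
    if thread_pool is not None:
        # Synchronous caller: one blocked thread per request caps concurrency.
        return min(demanded, thread_pool)
    # Asynchronous caller: nothing in the caller itself throttles concurrency.
    return demanded

# 200 req/s at 500 ms downstream latency:
print(in_flight_requests(200, 0.5, thread_pool=50))  # sync: capped at 50
print(in_flight_requests(200, 0.5))                  # async: 100 concurrent
```

This is why making one component asynchronous without widening the most bottlenecked part of the pipeline simply moves the queue: the volume demanded downstream jumps the moment the caller stops blocking.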


The downside of resilient leadership


Where does resilience come from? It’s a muscle that can be developed early on through a strong family life or a mentor relationship, or from positive experiences that help ready children and young adults for life’s tests in later years. But resilience is often also forged at young ages through adverse experiences that force children to rely on what psychologists call an “internal locus of control,” a concept developed in the 1950s by American psychologist Julian Rotter. When challenged, these young people decide that they are going to be in charge of their own fate and not let their circumstances define them. ... One of the messages these future leaders told themselves, or that was hammered into them by a parent, was “don’t be a victim.” Nobody would wish tough circumstances on another person, and yet it was in the moments of being tested that they discovered what they were made of. Adversity built a quiet confidence in them, because they went through tough times and knew they could do it again.


Why the cloud journey is hard

Conway’s Law states: “The structure of any system designed by an organisation is isomorphic to the structure of the organisation,” which means software or automated systems end up shaped like the organisational structure they’re designed in or designed for, according to Wikipedia. This could be why some organisations find it difficult to fully embrace cloud adoption: certain legacy organisational structures just don’t fit into a more demanding, agile-oriented cloud environment. Nico Coetzee, Enterprise Architect for Cloud Adoption and Modern IT Architecture at Ovations, elaborates: “Every company that embarks on its cloud journey can count on some deliverables not going as planned. There are many reasons for the failure of certain modernisation projects and cloud journeys, but it might come as a surprise to hear that the most common reason could be as simple as traditional structures.” If we go back to Melvin E Conway’s research on ‘How do committees invent?’ from 1967, there are some key insights.


Executive AI Fluency – Ending the Cycle of Failed AI Proof-of-Concept Projects

Executives cannot understand AI in a purely conceptual fashion. They need practical use cases for the types of AI projects they are brainstorming – and it is even better (at least initially) to have examples within their industry or related industries. One example of a strong AI use case in banking is fraud detection: some banks and AI vendors report having lowered their rate of false-positive results for financial fraud using predictive analytics solutions. A wide range of use cases allows leadership to better detect where AI opportunities might lie within the company and decide which projects deserve the most attention of the many that could be pursued. Banking leaders should be able to expect a chatbot solution to provide their customers basic answers to common and simple questions. Bank leadership should not expect their chatbot to handle complex conversations or draw upon rich context from previous email or phone conversations with the client. The technology is simply not at that level today. In this way, working with AI is more strategic than the “plug and play” nature of IT solutions.


US Treasury Warning: Beware of COVID-19 Financial Fraud

FinCEN notes that medical-related fraud scams, including fake cures, tests, vaccines and services, may require customers to pay via a pre-paid card instead of a credit card; require the use of a money services business or convertible virtual currency; or require that the buyer send funds via an electronic funds transfer to a high-risk jurisdiction. The agency notes that scams involving nondelivery of medical-related goods often occur through websites, robocalls or on the darknet. Scams involving price gouging include cases where individuals have been selling surplus items or newly acquired bulk shipments of goods - such as masks, disposable gloves, isopropyl alcohol, disinfectants, hand sanitizers, toilet paper and other paper products - at inflated prices, FinCEN explains. "Payment methods vary by scheme and can include the use of pre-paid cards, money services businesses, credit card transactions, wire transactions, or electronic fund transfers," it notes. ... "FinCEN is correct in its assertion that there will be a huge increase in all types of cybercrimes, especially related to medical scams and related cyberattacks," says former FBI agent Jason G. Weiss.


How the UK pensions industry is paving the way for open data sharing ecosystems

While some questions remain over how the regulatory standards from the pensions dashboard and Open Banking (a separate regulation focused on building transparency and open sharing into the banking industry) can be applied to a wider Open Finance initiative, the pension dashboard’s architecture — federated digital identity, UMA, and interoperability through secure Open APIs — provides a viable model for Open Finance. Crucially, these technologies conform to open standards, meaning the architecture that underpins them can be updated and synced with any new technology, preventing the formation of any legacy systems and allowing for consistent innovation. When adopted across the financial services ecosystem, they would create a variety of secure, trustworthy, and user-friendly tools that would empower users to engage more meaningfully with their finances. Picture it: financial advisors and brokers could deliver important financial advice more completely, immediately, and visibly through the kind of seamless user experiences that are currently the preserve of digital native sectors.


NCSC discloses multiple vulnerabilities in contact-tracing app

The encryption vulnerability in the beta app has arisen because the app does not encrypt proximity contact event data, and the data is not independently encrypted before it is sent to the central servers. This, said Levy, means that when data is transferred to the back-end, it is only protected by the transport layer security (TLS) protocol, so that if Cloudflare was compromised in some way, cyber criminals could access that data. He pointed out that this was something else that was sacrificed at first because of the need for speed. Finally, Levy noted some ambiguities and errors in statements made about the beta app. Among these was a statement that “the infrastructure provider and the healthcare service can be assumed to be the same entity”. This suggests that the NCSC trusts the network bridging the gap between user devices and the central NHS servers in the same way as it trusts the whole of the NHS, which is clearly not the case.



Quote for the day:


"You must learn to rule. It's something none of your ancestors learned." -- Frank Herbert


Daily Tech Digest - May 15, 2020

The Past, Present, and Future of API Gateways


AJAX (Asynchronous JavaScript and XML) development techniques became ubiquitous during this time. By decoupling data interchange from presentation, AJAX created much richer user experiences for end users. This architecture also created much “chattier” clients, as these clients would constantly send and receive data from the web application. In addition, ecommerce during this era was starting to take off, and secure transmission of credit card information became a major concern for the first time. Netscape introduced Secure Sockets Layer (SSL) -- which later evolved to Transport Layer Security (TLS) -- to ensure secure connections between the client and server. These shifts in networking -- encrypted communications and many requests over longer lived connections -- drove an evolution of the edge from the standard hardware/software load balancer to more specialized application delivery controllers (ADCs). ADCs included a variety of functionality for so-called application acceleration, including SSL offload, caching, and compression. This increase in functionality meant an increase in configuration complexity.


Adapting Cloud Security and Data Management Under Quarantine

The current state of affairs is not something envisioned by many business continuity plans, says Wendy Pfeiffer, CIO of Nutanix. Most organizations are operating in a hybrid mode, she says, with infrastructure and services running in multiple clouds. This can include private clouds, SaaS apps, Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Though this specific situation may not have been planned for, the cloud allows for unexpected needs to scale and pivot, Pfeiffer says. “Maybe we envisioned a region being inaccessible but not necessarily every region all at once.” Normally it can be easy to declare standards within IT, she says, and instrument an environment to operate in line with those standards to maintain control and security. Losing control of that environment under quarantines can be problematic. “If everyone suddenly pivots to work from home, then we no longer control the devices people use to access the network,” Pfeiffer says. Such disruption, she says, makes it difficult to control performance, security, and the user experience.


While 78 per cent of organisations said they are using more than 50 discrete cybersecurity products to address security issues, 37 per cent used more than 100 cybersecurity products. Organisations that discovered misconfigured cloud services experienced 10 or more data loss incidents in the last year, according to the report. IT professionals also have concerns about cloud service providers: nearly 80 per cent are concerned that cloud service providers they do business with will become competitors in their core markets. "Seventy-five per cent of IT professionals view public cloud as more secure than their own data centres, yet 92 per cent of IT professionals do not trust their organization is well prepared to secure public cloud services," the findings showed. Nearly 80 per cent of IT professionals said that recent data breaches experienced by other businesses have increased their organization's focus on securing data moving forward.


Continuous Security Through Developer Empowerment

Before DevOps kicked in, app performance monitoring (APM) was owned by IT, who used synthetic measurements from many points around the world to assess and monitor how performant an application was. These solutions were powerful, but their developer experience was horrible. They were expensive, which limited the tests developers could run. They excelled at explaining overall state through aggregated tests, but offered little value to a developer trying to troubleshoot a performance problem. As a result, developers rarely used them. Then New Relic came on the scene, introducing a different approach to APM. Its tools were free or cheap to start with, making them accessible to all dev teams. They used instrumentation to offer rich results in developer terms (call stacks, lines of code), making them better for fixing problems. This new approach revolutionized the APM industry, embedded performance monitoring into dev practices and made the web faster. The same needs to happen for application security.


Data security guide: Everything you need to know

The move to the cloud presents an additional threat vector that must be well understood with respect to data security. The 2019 SANS State of Cloud Security survey found that 19% of survey respondents reported an increase in unauthorized access by outsiders into cloud environments or cloud assets, up 7% since 2017. Ransomware and phishing also are on the rise and considered major threats. Companies must secure data so that it cannot leak out via malware or social engineering. Breaches can be costly events that result in multimillion-dollar class action lawsuits and victim settlement funds. If companies need a reason to invest in data security, they need only consider the value placed on personal data by the courts. Sherri Davidoff, author of Data Breaches: Crisis and Opportunity, listed five factors that increase the risk of a data breach: access; amount of time data is retained; the number of existing copies of the data; how easy it is to transfer the data from one location to another -- and to process it; and the perceived value of the data by criminals.


This new, unusual Trojan promises victims COVID-19 tax relief


The malware is unusual as it is written for Node.js, a JavaScript runtime primarily used for web server development. "However, the use of an uncommon platform may have helped evade detection by antivirus software," the team notes. The Java downloader, obfuscated via Allatori in the lure document, grabs the Node.js malware file -- either "qnodejs-win32-ia32.js" or "qnodejs-win32-x64.js" -- alongside a file called "wizard.js." Either a 32-bit or 64-bit version of Node.js is downloaded depending on the Windows system architecture on the target machine. Wizard.js' job is to facilitate communication between QNodeService and its command-and-control (C2) server, as well as to maintain persistence through the creation of Run registry keys. After executing on an impacted system, QNodeService is able to download, upload, and execute files; harvest credentials from the Google Chrome and Mozilla Firefox browsers, and perform file management. In addition, the Trojan can steal system information including IP address and location, download additional malware payloads, and transfer stolen data to the C2.


Quantum computing analytics: Put this on your IT roadmap


"There are three major areas where we see immediate corporate engagement with quantum computing," said Christopher Savoie, CEO and co-founder of Zapata Quantum Computing Software Company, a quantum computing solutions provider backed by Honeywell. "These areas are machine learning, optimization problems, and molecular simulation." Savoie said quantum computing can bring better results in machine learning than conventional computing because of its speed. This rapid processing of data enables a machine learning application to consume large amounts of multi-dimensional data that can generate more sophisticated models of a particular problem or phenomenon under study. Quantum computing is also well suited for solving problems in optimization. "The mathematics of optimization in supply and distribution chains is highly complex," Savoie said. "You can optimize five nodes of a supply chain with conventional computing, but what about 15 nodes with over 85 million different routes? Add to this the optimization of work processes and people, and you have a very complex problem that can be overwhelming for a conventional computing approach."


COBIT Tool Kit Enhancements

The value of this tool is that it provides a convenient means of quickly assessing and assigning relevant roles to practices across the 40 COBIT objectives. COBIT promotes using a common language and common understanding among practitioners. Common terminology facilitates communication and mitigates opportunities for error. Using RACI charts and the new COBIT Tool Kit spreadsheet provides the guidance to help practitioners extract the COBIT practices relevant for each job role. Another benefit of compiling all practices into a single RACI chart is that metrics reporting can be better assessed. A user can filter all practices by accountability of a single role and then compare metrics reporting on those practices and determine whether sufficient coverage has been created. An assessment of that type is not as effective when RACIs are developed at the higher, objective, level. The new spreadsheet can be found in the complementary COBIT 2019 Tool Kit. The tool kit is available on the COBIT page of the ISACA website.
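A sketch of the kind of filtering the consolidated RACI chart enables; the practice identifiers and role assignments below are hypothetical, not taken from the actual tool kit:

```python
# Each row mirrors one line of a consolidated RACI chart: a COBIT practice
# plus the RACI letter assigned to each role (A = Accountable, R = Responsible,
# C = Consulted, I = Informed).
practices = [
    {"objective": "APO01", "practice": "APO01.01", "raci": {"CIO": "A", "CISO": "R"}},
    {"objective": "DSS05", "practice": "DSS05.03", "raci": {"CISO": "A", "CIO": "C"}},
    {"objective": "MEA01", "practice": "MEA01.02", "raci": {"CIO": "A"}},
]

def accountable_for(role):
    """Filter all practices down to those where `role` is Accountable."""
    return [p["practice"] for p in practices if p["raci"].get(role) == "A"]

print(accountable_for("CIO"))   # practices to check for metrics coverage
```

Comparing the filtered list against the metrics actually being reported is the coverage assessment described above, done at the practice level rather than the objective level.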


Build your own Q# simulator – Part 1: A simple reversible simulator


Simulators are a particularly versatile feature of the QDK. They allow you to perform various different tasks on a Q# program without changing it. Such tasks include full state simulation, resource estimation, or trace simulation. The new IQuantumProcessor interface makes it very easy to write your own simulators and integrate them into your Q# projects. This blog post is the first in a series that covers this interface. We start by implementing a reversible simulator as a first example, which we extend in future blog posts. A reversible simulator can simulate quantum programs that consist only of classical operations: X, CNOT, CCNOT (Toffoli gate), or arbitrarily controlled X operations. Since a reversible simulator can represent the quantum state by assigning one Boolean value to each qubit, it can run even quantum programs that consist of thousands of qubits. This simulator is very useful for testing quantum operations that evaluate Boolean functions.
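A toy analogue of such a reversible simulator, written in Python rather than against the actual IQuantumProcessor interface: each qubit is a single Boolean, so only the classical gates listed above are supported.

```python
class ReversibleSimulator:
    """Tracks one Boolean per qubit; supports only classical reversible gates."""

    def __init__(self, num_qubits):
        self.state = [False] * num_qubits

    def x(self, target):
        # NOT gate: flip the target bit.
        self.state[target] = not self.state[target]

    def cnot(self, control, target):
        # Flip the target only if the control bit is set.
        if self.state[control]:
            self.x(target)

    def ccnot(self, c1, c2, target):
        # Toffoli gate: flip the target only if both controls are set.
        if self.state[c1] and self.state[c2]:
            self.x(target)

sim = ReversibleSimulator(3)
sim.x(0); sim.x(1)      # prepare qubits 0 and 1 in the "one" state
sim.ccnot(0, 1, 2)      # Toffoli computes AND of qubits 0 and 1 into qubit 2
print(sim.state)        # [True, True, True]
```

Because no superposition is tracked, memory grows only linearly with the qubit count, which is what lets a reversible simulator handle programs with thousands of qubits.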


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

This article describes Diligent Engine, a light-weight cross-platform graphics API abstraction layer that is designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, Unity native plugin or any other 3D application. ... As mentioned earlier, Diligent Engine follows next-gen APIs to configure the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but is also beneficial for older APIs as it eliminates pipeline misconfiguration errors.



Quote for the day:


"Different times need different types of leadership." -- Park Geun-hye


Daily Tech Digest - April 04, 2020

"Unlike regular times when you could dispatch a technician to hospitals, or you could actually show the doctors how to operate equipment, fix it, and so on, they need to do it remotely," Churchill said. "So we combined them with video and AR." Once TechSee receives an inquiry, it is given to a technician and the technician sends a web link via SMS to a hospital staff member. This allows the hospital support person to use their smartphone camera or tablet camera to show the technician the issue, Churchill noted. The user shows the technician the problem, and then the technician diagnoses the issue and uses AR to visually guide the hospital employee to a resolution, he added. Churchill said that TechSee works with more than 100 enterprises in a variety of sectors, with Medtechnica being one of its biggest clients in healthcare. While TechSee's solution can be applied to any system--including X-rays, routers, smart thermostats, and more--the demand for ventilators is amplifying that use case. This solution is completely web-based, so the user isn't forced to download an app. The AI-powered platform can recognize devices and technical issues, as well as automate the support process, Churchill said.


Only very rarely can risk be completely eliminated. However, inherent risk can be mitigated through a combination of risk mitigation strategies, risk shifting and, at the end of the day, acceptance of the residual risk. When addressing big data risks in particular, two types of risks must be discussed: the risk of data breaches and the risk of data misuse. The former is addressed through data security, while the latter is most commonly addressed through data privacy and regulation. When it comes to data security, one of the most significant sources of risk is the overreliance on fairly immutable data elements for identification such as, for example, social security number, names, addresses, dates of birth, credit card numbers, and the like. When any long-lived data element is exposed and misused, the damage is usually broad and long-lasting because changing those data elements is difficult and costly. The mechanism that I’m referring to is known as public-key cryptography and digital signatures, which was invented in the 1970s. While this is widely used as the method web browsers use to identify websites (adding the “secure” or “SSL/TLS” labels to the URL bar), it has not had enough traction outside of that specific domain.
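The sign/verify mechanism referred to here can be illustrated with textbook RSA; the parameters below are deliberately tiny teaching values, and real deployments use vetted libraries and far larger keys (the modular-inverse form of pow requires Python 3.8+):

```python
# Textbook RSA with toy parameters, purely to show the mechanism.
p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

def sign(message_digest):
    """Only the private-key holder can produce this value."""
    return pow(message_digest, d, n)

def verify(message_digest, signature):
    """Anyone with the public key (e, n) can check the signature."""
    return pow(signature, e, n) == message_digest

sig = sign(123)
print(verify(123, sig))   # True: signature matches the signed value
print(verify(124, sig))   # False: any tampering breaks verification
```

The asymmetry is the point: verification needs no secret, so identity can be proven without ever transmitting a long-lived, reusable data element.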


For one, the WireGuard protocol does away with cryptographic agility -- the concept of offering choices among different encryption, key exchange and hashing algorithms -- as this has resulted in insecure deployments with other technologies. Instead the protocol uses a selection of modern, thoroughly tested and peer-reviewed cryptographic primitives that result in strong default cryptographic choices that users cannot change or misconfigure. If any serious vulnerability is ever discovered in the cryptographic primitives used, a new version of the protocol is released, and there’s a mechanism for negotiating the protocol version between peers. WireGuard uses ChaCha20 for symmetric encryption with Poly1305 for message authentication, a combination that’s more performant than AES on embedded CPU architectures that don’t have cryptographic hardware acceleration; Curve25519 for elliptic-curve Diffie-Hellman (ECDH) key agreement; BLAKE2s for hashing, which is faster than SHA-3; and a 1.5 Round Trip Time (1.5-RTT) handshake that’s based on the Noise framework and provides forward secrecy. It also includes built-in protection against key impersonation, denial-of-service and replay attacks, as well as some post-quantum cryptographic resistance.
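BLAKE2s, one of WireGuard's fixed primitives, happens to ship in Python's standard library, which makes the "no knobs to misconfigure" point easy to demonstrate; the input labels below are illustrative:

```python
import hashlib

# Plain hashing: the algorithm and default digest size are fixed choices,
# not negotiated between peers.
digest = hashlib.blake2s(b"handshake transcript").hexdigest()

# Keyed mode doubles as a MAC, similar in spirit to WireGuard's keyed hashing.
mac = hashlib.blake2s(b"message", key=b"shared-secret").hexdigest()

print(digest)
print(len(mac))  # 64 hex characters, i.e. the full 32-byte digest
```

Contrast this with agile protocols such as classic TLS cipher-suite negotiation, where a misconfigured server can silently downgrade to a weak algorithm.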


How to start your career in cyber security

Unlike many professions, you don’t need cyber security experience to get into the field, although many people entering the field will come from jobs that have similar skillsets, such as systems administration or information analysis. If you can demonstrate the relevance of your existing experience – what recruiters call ‘transferable skills’ – there’s no reason why you can’t get a foothold on the cyber security career ladder. There are also plenty of entry-level positions available. Account executives and junior penetration testers, for example, tend to have little work experience, and can learn while on the job. ... The best way to gain an advantage over other prospective cyber security professionals is to become qualified. The qualifications you need will depend on your career path. If you don’t have this mapped out yet, or you simply want a strong overall understanding of how to navigate security risks, you should seek out a course that covers general topics, such as our Certified Cyber Security Foundation Training Course. This one-day course explains the fundamentals of cyber security and shows you how to protect your organisation from a range of threats.


Is COVID-19 Driving a Surge in Unsafe Remote Connectivity?

As more organizations shift to a remote workforce, new working patterns and technology adoption - including shadow IT - may lead to corporate data suddenly being poorly secured or stored in a manner that violates regulatory requirements. And more systems may be spun up that fail to secure commonly used protocols, such as RDP. "Changes to the network perimeter can also create unanticipated threats, as a higher burden is placed on remote-access systems, and if not correctly implemented, may expose systems to the internet," says Matt Linney, a senior security consultant at 7 Elements. "Looking at this now could save substantial loss in the future." The problem may be exacerbated by COVID-19 driving many organizations to rapidly embrace the equivalent of bootstrap approaches to digital transformation and moving to cloud-based platforms and core services without having first carefully planned, tested, validated and secured their approach (see: Zoom Fixes Flaw That Could Allow Strangers Into Meetings).



Why Continuous Monitoring of Critical Data Is So Essential

To ensure business continuity, manufacturers in India that now have a 100 percent remote workforce because of the COVID-19 pandemic must be vigilant about ensuring critical data is protected through continuous monitoring, says Ravikiran S. Avvaru, head of IT and security at the Gurgaon-based manufacturing group Apollo Tyres Ltd. "As part of our business continuity plan, we identified critical applications for the business which are integrated with the dealers, customers and suppliers and discussed with our third-party vendors, such as Amazon and Microsoft, how to extend support in ensuring the applications are up and running and in secure fashion," Avvaru says in an interview with Information Security Media Group. In addition to enhancing security for business-critical applications accessible in the cloud, for accessing legacy applications housed at a data center, the company has deployed personal firewalls, a VPN along with remote desktop protocols and data leak prevention tools, he explains.


According to Microsoft, Fabrikam called in Microsoft's Cybersecurity Solutions Group's Detection and Response Team (DART) eight days after the employee had opened the phishing email, by which time its computers and critical systems were failing and its network bandwidth had been completely overrun by Emotet. The malware used the victim's compromised computers to launch a distributed denial of service (DDoS) and overwhelm its network. "The virus threatened all of Fabrikam's systems, even its 185-surveillance camera network. Its finance department couldn't complete any external banking transactions, and partner organizations couldn't access any databases controlled by Fabrikam. It was chaos," Microsoft's DART team writes. "They couldn't tell whether an external cyberattack from a hacker caused the shutdown or if they were dealing with an internal virus," it explains further. "It would have helped if they could have even accessed their network accounts. Emotet consumed the network's bandwidth until using it for anything became practically impossible. Even emails couldn't wriggle through."


CSO Pandemic Impact Survey

As of March 23, that number had climbed to 77.7%, a 4.7-fold increase. Notably, high-tech firms grew from 31.9% to 90.2%. While 81% expressed confidence that their existing security infrastructure could handle their employees working from home, 61% were more concerned about security risks targeting WFH employees today than they were three months ago. ... Despite the high levels of confidence that their security infrastructures are up to the task at hand, 22% of organizations have found themselves out shopping for new security solutions/services to address the new work dynamic. Businesses least likely to be investing in new technology or services came from the same industries that identified as most prepared: financial services (12%) and healthcare (14%). Only 7% of SMB organizations (fewer than 1,000 employees) indicated that they had to make security purchases in response to the current conditions, which may indicate a lack of visibility into their risk environments, a lack of available budget to support new investments, or a combination of both.


If your company strongly encourages workers to stay home in response to the virus, a significant portion of your workforce might be working from home for extended periods of time. From a data-protection standpoint, this significantly increases the chances that important intellectual property will be created outside of your data center. If your company currently relies on storing such data on file servers or similar systems, remote employees will probably not be able to use those systems easily. As a result, they will create and store important data directly on their laptops, leaving centralized company storage out of the picture. This means you should probably examine your company's policy regarding data protection of laptops and mobile devices. Many companies don't provide backup and recovery for mobile devices, despite the fact that most experts feel they should. Now might be a good time to start. The main reasons early attempts at laptop backup failed were that users would kill the backup process because it slowed them down, and that it cost too much. The good news is that several providers can now back up your laptops and mobile devices in such a way that users never realize backups are running.


AI needs to show return
One key driver of the lack of return from AI is the simple failure to invest enough. Survey data suggest most companies don't invest much yet, and I mentioned one survey above suggesting that investment levels have peaked in many large firms. And the issue is not just the level of investment, but also how the investments are being managed. Few companies demand ROI analysis both before and after implementation; they apparently view AI as experimental, even though the most common version of it (supervised machine learning) has been available for over fifty years. The same companies may not plan for increased investment at the deployment stage (typically one or two orders of magnitude more than a pilot), focusing only on pre-deployment AI applications. Of course, with any technology it can be difficult to attribute revenue or profit gains to the application. Smart companies seek intermediate measures of effectiveness, including user behavior changes, task performance, and process changes, that would precede improvements in financial outcomes. But it's rare for companies to measure even these. Along with several other veterans of big data and AI, I am forming the Return on AI Institute, which will carry out programs of research and structured action, including surveys, case studies, workshops, methodologies, and guidelines for projects and programs.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell

Daily Tech Digest - March 05, 2020

CISO Imperatives in the Age of Digital Transformation

With the proliferation of open source, enterprises need to secure not just commercial software but also open source software. Every member of a connected ecosystem, from vendors, service providers, and practitioners to end consumers, needs to be secure; any weak link can put the entire ecosystem at risk. Open source usage is increasingly seen in categories like cloud management, security, analytics, and storage, which have historically been dominated by proprietary products. Some of the key emerging open source technologies are open source firewalls, serverless workloads, trustworthy AI, blockchain, and quantum computing. Fueled by open methodologies and peer production, employees from enterprises are contributing to open source communities and collaborating better, forcing management to rethink their strategies. 5G, the next generation of wireless technology, will enable enhanced speed and performance, lower latency, and better efficiency. It is expected to be used broadly for IoT communications and video, while controls/automation, fixed wireless access, high-performance edge analytics, and location tracking are second-tier uses for 5G-capable networks.



Verizon: Companies will sacrifice mobile security for profitability, convenience

"For a number of reasons, mobile today is a smaller issue than many others," Zumerle said via email. "Among other factors, the operating system is more hardened, and mobile devices have less access to critical enterprise infrastructure and data." The Verizon report found that 39% of organizations admitted to suffering a security compromise involving a mobile device, up from 33% in the 2019 report and 27% in 2018. Of those that suffered a compromise, 66% said the impact was major and 36% said it had lasting repercussions. Twenty percent of organizations that suffered a mobile compromise said a rogue or insecure Wi-Fi hotspot was involved. "Although the risks of public Wi-Fi are becoming well known, convenience trumps policy, even common sense, for many users. Some organizations are trying to prevent this by implementing Wi-Fi-specific policies, but inevitably, rules will be broken," Verizon said. According to MobileIron, 7% of protected devices detected a man-in-the-middle (MitM) attack in the past year.


Report: Most IoT transactions are not secure

Zscaler is a bit generous in what it defines as enterprise IoT, ranging from devices such as data-collection terminals, digital signage media players, industrial control devices, and medical devices to decidedly non-business devices like digital home assistants, TV set-top boxes, IP cameras, smart home devices, smart TVs, smart watches, and even automotive multimedia systems. "What this tells us is that employees inside the office might be checking their nanny cam over the corporate network. Or using their Apple Watch to look at email. Or working from home, connected to the enterprise network, and periodically checking the home security system or accessing media devices," the company said in its report. Which is typical, to be honest, and let whoever is without sin cast the first stone in that regard. What's troubling is that roughly 83% of IoT-based transactions happen over plaintext channels, while only 17% use SSL. The use of plaintext is risky, opening traffic to packet sniffing, eavesdropping, man-in-the-middle attacks, and other exploits. And there are a lot of exploits.
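The 83%/17% split above is just a ratio over transaction schemes. A minimal sketch of how such a figure might be computed from access logs; the helper name and sample URLs are hypothetical, not Zscaler's methodology:

```python
from urllib.parse import urlparse

def plaintext_share(urls):
    """Return the fraction of transactions sent over plaintext (non-TLS) schemes.

    Hypothetical helper: each IoT transaction is represented by its URL, and
    anything that isn't 'https' is counted as plaintext.
    """
    if not urls:
        return 0.0
    plaintext = sum(1 for u in urls if urlparse(u).scheme != "https")
    return plaintext / len(urls)

# Illustrative log: 5 of 6 transactions over plain HTTP.
sample = [
    "http://cam.local/feed", "http://tv.local/ping", "http://watch.local/sync",
    "http://sign.local/ad", "http://stb.local/guide", "https://api.vendor.com/telemetry",
]
share = plaintext_share(sample)
```

In practice the same count would be taken over packet captures or proxy logs rather than a URL list, but the metric itself is this simple.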


Envision The Future To Unlock Business Value

While we were busy applying service packs and working out how to prevent “dumb users” from getting themselves into trouble at work, those same people were beginning to enjoy the spoils of the 21st century. Armed increasingly with high speed domestic and even mobile broadband, as well as a wide range of tactile consumer tech devices, they were gradually starting to enjoy a dizzying array of consumer services that were transforming their daily lives. From building stronger relationships with friends and family through social networking, through to the transformation in their retail and lifestyle habits, for the first time ever, normal, every day people (not just nerds like me and my colleagues) were beginning to enjoy the opportunity of a world where technology is something that lifts our capability, helping us to achieve more in all aspects of our lives. Slowly, the centre of gravity of people’s use of technology shifted from the world of work to their personal lives to the point where, certainly by the end of the last decade, most people had access to better technology in their domestic lives than they did at work.


5 big microservices pitfalls to avoid during migration


Rushing into microservices adoption is one of the most common mistakes software teams make. Even though microservices provide a chance to deploy new applications and updates quickly, the distributed architecture's inherent complexity means it's not ideal for certain types of organizations or applications. Teams should review the state of their existing development culture to see if management skills are in place. They should also examine existing applications to determine whether they are suitable and ready for a migration to microservices. Agile and DevOps principles should be in place, as microservices tend not to play well with a Waterfall development approach. Teams also need diligent training and access to documentation before they begin a migration of monolith-based workloads. Performance issues soon arise when a microservices migration starts without a proper plan and appropriate infrastructure investments in place. Teams can mitigate these issues if they ensure services are strictly independent from each other but can still communicate normally, as is the target for a loosely coupled architecture.


AI, Azure and the future of healthcare with Dr. Peter Lee

What’s interesting about AI for Health is that it’s the first pillar in the AI for Good program that actually overlaps with a business at Microsoft and that’s Microsoft Healthcare. One way that I think about it is, it’s an outlet for researchers to think about, what could AI do to advance medicine? When you talk to a lot of researchers in computer science departments, or across Microsoft research labs, increasingly you’ll see more and more of them getting interested in healthcare and medicine and the first things that they tend to think about, if they’re new to the field, are diagnostic and therapeutic applications. Can we come up with something that will detect ovarian cancer earlier? Can we come up with new imaging techniques that will help radiologists do a better job? Those sorts of diagnostic and therapeutic applications, I think, are incredibly important for the world, but they are not Microsoft businesses. So the AI for Health program can provide an outlet for those types of research passions. And then there are also, as a secondary element, four billion people on this planet today that have no reasonable access to healthcare.


Why Unsupervised Machine Learning is the Future of Cybersecurity


There are two types of unsupervised learning: discriminative models and generative models. A discriminative model can only tell you that if you give it X, the consequence is Y, whereas a generative model can tell you the total probability that you will see X and Y at the same time. The difference is as follows: the discriminative model assigns labels to inputs and has no predictive capability. If you gave it a different X that it has never seen before, it can't tell what the Y is going to be, because it simply hasn't learned that. With generative models, once you set one up and find the baseline, you can give it any input and ask it for an answer. Thus, it has predictive ability; for example, it can generate a possible network behavior that has never been seen before. So let's say some person sends a 30-megabyte file at noon. What is the probability that he would do that? If you asked a discriminative model whether this is normal, it would check to see if the person had ever sent such a file at noon before... but only specifically at noon. A generative model would look at the context of the situation, check whether they had ever sent a file like that at 11:59 a.m. or 12:30 p.m. too, and base its conclusions on the surrounding circumstances in order to make more accurate predictions.
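The contrast can be made concrete with a toy sketch. All data, labels, and names here are invented for illustration, and the "discriminative" side is deliberately reduced to the article's narrow sense of a lookup over previously seen inputs; the generative side fits a per-class distribution, so it can score a file size it has never observed:

```python
from statistics import NormalDist

# Toy training data: file-transfer sizes (MB) with labels observed on the network.
train = [(1.0, "normal"), (2.0, "normal"), (1.5, "normal"), (28.0, "anomaly"), (31.0, "anomaly")]

# Lookup in the article's narrow sense: it only knows exact inputs it has seen.
discriminative = {x: y for x, y in train}

def fit_generative(data):
    """Fit a Gaussian per class plus a class prior, so any input can be scored."""
    models = {}
    for label in {y for _, y in data}:
        xs = [x for x, y in data if y == label]
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs) or 1e-6
        prior = len(xs) / len(data)
        models[label] = (NormalDist(mean, var ** 0.5), prior)
    return models

def generative_predict(models, x):
    # Pick the label maximising prior * likelihood, i.e. the joint probability p(x, y).
    return max(models, key=lambda lab: models[lab][1] * models[lab][0].pdf(x))

models = fit_generative(train)
unseen = 30.0  # a 30 MB file, a size never observed before
```

The lookup has no entry for 30.0 MB, while the fitted model places it confidently in the anomalous class because of the surrounding context of similar transfers.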


Advanced Tech Needs More Ethical Consideration & Security

The recent confrontation between the US and Iran is a case in point. Threats of cyber warfare along with conventional military action put security executives at every major organization on high alert and questioning what to do in the event of a breach. There are worries of vulnerabilities to the infrastructure and that attackers could be impossible to identify. Very few organizations are fully prepared to respond to an incident at an enterprise or organizational level. An effective response to a major cyber incident requires current, effective IT-focused cyber plans, but also participation from all lines of business and operational support areas to ensure a successful integrated, orchestrated recovery. The benefits of advanced technologies to industry and commerce are manifold. In healthcare, robotic surgeries improve recovery rates and reduce days spent in the hospital. AI and machine learning boost productivity in the data-dependent financial services industry, increasing analytical efficiency while reducing manual work and human errors. The same goes for most industries. 


IoT-specific regulations aren’t the only ones that can have an impact on the marketplace. Depending on the type of information a given device handles, it could be subject to the growing list of data-privacy laws being implemented around the world, most notably Europe’s General Data Protection Regulation, as well as industry-specific regulations in the U.S. and elsewhere. The U.S. Food and Drug Administration, noted Maxim, has been particularly active in trying to address device-security flaws. For example, last year it issued security warnings about 11 vulnerabilities that could compromise medical IoT devices that had been discovered by IoT security vendor Armis. In other cases it issued fines against healthcare providers. But there’s a broader issue with devising definitive regulation for IoT devices in general, as opposed to prescriptive ones that simply urge manufacturers to adopt best practices, he said. Particular companies might have integrated security frameworks covering their vertically integrated products – such as an industrial IoT company providing security across factory floor sensors – but that kind of security is incomplete in the multi-vendor world of IoT.



Intel CSME bug is worse than previously thought

At the time, the CVE-2019-0090 vulnerability was only described as a firmware bug that allowed an attacker with physical access to the CPU to escalate privileges and execute code from within the CSME. Other Intel technologies, like Intel TXE (Trusted Execution Engine) and SPS (Server Platform Services), were also listed as impacted. But in new research published today, Ermolov says the bug can be exploited to recover the Chipset Key, which is the root cryptographic key that can grant an attacker access to everything on a device. Furthermore, Ermolov says that this bug can also be exploited via "local access" -- by malware on a device, and not necessarily by having physical access to a system. The malware will need to have OS-level (root privileges) or BIOS-level code execution access, but this type of malware has been seen before and is likely not a hurdle for determined and skilled attackers that are smart enough to know to target the CSME.



Quote for the day:


"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray


Daily Tech Digest - January 15, 2020


Microsoft said that it had not seen the vulnerability exploited in any active attacks, likely the reason the company classified the security patch as "Important" rather than as "Critical." The vulnerability came to light when it was discovered by the National Security Agency. In its advisory, the NSA referred to the bug as severe, saying that sophisticated cyber actors would understand the flaw very quickly, thus making the affected versions of Windows fundamentally vulnerable. The agency said it recommends that all January 2020 Patch Tuesday patches be installed as soon as possible to fix the vulnerability on all Windows 10 and Windows Server 2016/2019 systems. "The consequences of not patching the vulnerability are severe and widespread," the NSA said. "Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners." After finding and researching the flaw, the NSA reported it directly to Microsoft, which then took the quick step of investigating it and issuing the patch.


Researchers found that 48% of consumers are more sensitive to anti-fraud measures that disrupt their online experience than they were a year ago. This means that retailers and restaurants have an increased imperative to balance fraud mitigation and customer experience. Yet, only 64% of organizations’ customers have confidence in the security of their digital channels. In this era of high customer expectations, increasing digital fraud risk, and competition to continuously innovate, businesses must address this critical interconnection. “Opportunities for fraud increase as businesses adopt new features, such as voice ordering or mobile wallets. Businesses do this to engage their customers and provide an enhanced customer experience,” said Rich Stuppy, Chief Customer Officer at Kount. “Unfortunately, these businesses are not adopting the proper controls related to fraud. This report underscores the fact that digital innovation and the corresponding increases in revenue in these industries will never reach their full potential without integrating suitable fraud prevention initiatives.”


Today's networks need to be highly agile so changes can be propagated across the network in near real time, enabling it to keep up with the demands of the business. Network agility comes from having centralized control, where configuration changes can be made once and propagated across the network instantly. Ideally, network changes could be coordinated with application changes so that lagging network performance doesn't slow the business down. Achieving a higher level of agility will likely require a refresh of the infrastructure if the network is more than five years old, and that means adopting SDN. Traditional infrastructure had an integrated control and data plane, so changes had to be made on a box-by-box basis. This is why networks took so long to configure and lacked agility. With an SDN model, the control plane is separated from the data plane, centralizing control so network engineers can define a change and push it out across the entire network at once. Older equipment isn't designed to be software-first, so look for infrastructure that is built on a modernized operating system like Linux and that can be programmed using current languages such as Python and Ruby.
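The "define a change once, push it everywhere" idea behind centralized control can be sketched in a few lines. `SdnController` and its method names are hypothetical, standing in for a real controller's northbound API, not any vendor's product:

```python
class SdnController:
    """Minimal sketch of SDN-style centralized control: a change is defined once
    on the controller and propagated to every device, instead of being
    configured box by box."""

    def __init__(self, devices):
        # The controller holds the intended configuration for each device.
        self.configs = {name: {} for name in devices}

    def push(self, change):
        # One definition, applied network-wide in a single operation.
        for config in self.configs.values():
            config.update(change)

controller = SdnController(["leaf-1", "leaf-2", "spine-1"])
controller.push({"vlan": 42, "acl": "deny-telnet"})
```

With the integrated control plane of traditional gear, the equivalent would be a login and edit on each of the three boxes separately, which is exactly the agility gap the paragraph describes.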



Bendable glass is the holy grail of foldable phone design. So far, plastic screens have been more prone to damage from casual scrapes than hard glass. Without a protective material, the phone's internal workings are susceptible to breaking from pressure, water, dust and sharp objects. Samsung bore the brunt of this reality when its Galaxy Fold sustained several types of screen damage before the phone officially went on sale.  With their high prices and untested designs, foldable phones are a tough sell as is. A strong cover material to protect against drops and scratches could help shift foldable phones from expensive curiosities to serious products that could one day replace your traditional shingle-shaped phone. Gorilla Glass-maker Corning showed CNET glass that's thin enough to fold without breaking, but it's still in development and isn't commercially available. If it were, we'd see a lot more foldable phones today. Without a ready supply of glass thin enough to fold in half and strong enough not to crack, splinter or break, device-makers have had to choose whether to wait for a new material or work with what they have.


For a long time, with all due respect to the Jeeves and other assorted yahoos of the world, Google's position as the gatekeeper to the world's information has seemed untouchable. But guess what? Amazon is little by little breaking through that barrier and — on some level, at least — threatening to make Google far less relevant than it is today. Consider: A forecast assembled by eMarketer suggests that Amazon will be the sole company to increase its revenue related to U.S. search advertising over the coming two years. Amazon, the organization believes, will jump up to represent nearly 16% of money earned from search-related advertising in America — up from about 13% in 2019 — while Google will fall from 73% in 2019 to 70.5% in 2021. Already, Amazon's ad business is believed to have grown by somewhere in the ballpark of 50% from the end of 2018 to the end of 2019, according to AdWeek, and prices for advertising on Amazon have reportedly gone up by 200% over the past couple years — all while prices for advertising on Google have remained relatively constant.



BullSequana XH2000 is expected to run weather predictions faster and better than its predecessor. Florence Rabier, director general at ECMWF, said: "We will now be able to run higher resolution forecasts in under an hour, meaning better information will be shared with our member states even faster." Atos's technology will also help to improve the ECMWF's "ensemble prediction" system (EPS). The program, introduced in 1992, is a way to gauge how accurate a specific weather forecast is. Instead of delivering only one forecast, the EPS produces 51 predictions, which all include slight variations in the initial weather conditions. In other words, the system gives users a range of possible scenarios, as well as the likelihood of their occurrence. For example, the program could provide a government with an estimate of the likelihood of severe flooding in certain parts of the country. Currently, the EPS's 15-day forecasts have a resolution of 18km; but with BullSequana, the ECMWF is hoping that it can run the system at a resolution of 10km.




Although the number of impacted organisations remains low, such attacks – exemplified by the ongoing Travelex crisis, the October 2019 ransoming of shipping services firm Pitney Bowes, and various attacks on public sector bodies – are more severe and usually carefully chosen, as the organised gangs behind them are looking to extort the maximum possible sum of money. Other growth areas in 2019 included Magecart infections against e-commerce websites, which hit hundreds of victims, and attacks conducted through the cloud. Check Point revealed that while 90% of enterprises now use cloud services, 67% of security teams feel they do not have proper visibility into their infrastructure. As a result, the magnitude of cloud-related attacks and breaches was up substantially, with misconfiguration of cloud resources the biggest cause. 


Ethical AI, in simple words, is about ensuring your AI models are fair, ethical, and unbiased. So how does bias get into a model? Let's assume you are building an AI model that provides salary suggestions for new hires, and as part of building the model, you have taken gender as one of the features used to suggest salary. The model will then discriminate in salary based on gender. In the past, this bias crept in through human judgments and various social and economic factors, but if you bake it into a new model, it's a recipe for disaster. The whole idea is to build a model that is not biased and suggests salary based on people's experience and merit. Take another example of an application providing restaurant recommendations and allowing a user to book a table. The AI application is designed to look at the amount spent in previous transactions and restaurant ratings (along with other features), and the AI system starts recommending restaurants that are more expensive.
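One common (and admittedly partial) mitigation for the salary example is simply excluding the sensitive attribute before training, so two equally qualified candidates produce identical model inputs. A minimal sketch with invented feature names; in practice, proxy features correlated with gender would also need auditing:

```python
SENSITIVE_FEATURES = {"gender"}  # attributes excluded to avoid encoding historical bias

def strip_sensitive(records, sensitive=SENSITIVE_FEATURES):
    """Return training records with sensitive attributes removed, so a salary
    model can only learn from merit-related features (names are illustrative)."""
    return [{k: v for k, v in rec.items() if k not in sensitive} for rec in records]

# Two candidates with identical experience and skill, differing only in gender.
candidates = [
    {"experience_years": 7, "skill_score": 88, "gender": "F"},
    {"experience_years": 7, "skill_score": 88, "gender": "M"},
]
features = strip_sensitive(candidates)
```

After stripping, the two records are indistinguishable, so any model trained on them must suggest the same salary for both.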


Teleportation involves moving information instantaneously and securely. In the "Star Trek" series, fictional people move immediately from one place to another via teleportation. In the University of Bristol experiment, data is passed instantly via a single quantum state across two chips using light particles, or photons. Importantly, each of the two chips knows the characteristics of the other, because they're entangled through quantum physics, meaning they share a single physics-based state. The researchers involved in these successful silicon tests said they built the photon-based silicon chips in a lab and then used them to encode the quantum information in single particles. It was "a high-quality entanglement link across two chips, where photons on either chip share a single quantum state," said Dan Llewellyn of the University of Bristol in a press release. In entanglement links used for data transmission, information is conjoined, or entangled, so that the start of a link has the same state as the end of the link. The particles, and thus the data, are at the beginning of the link and at the end of the link at the same time.
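The "single shared state" can be illustrated numerically. For the textbook Bell state (|00> + |11>)/sqrt(2), the Born rule gives equal probability to the two photons agreeing (both 0 or both 1) and zero probability to their disagreeing; this is a toy amplitude calculation, not a simulation of the Bristol chips:

```python
import math

# Amplitudes of the Bell state (|00> + |11>)/sqrt(2): the two photons share a
# single quantum state, so their measurement outcomes are perfectly correlated.
bell = {"00": 1 / math.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / math.sqrt(2)}

def probabilities(state):
    # Born rule: an outcome's probability is the squared magnitude of its amplitude.
    return {outcome: abs(amp) ** 2 for outcome, amp in state.items()}

probs = probabilities(bell)
```

Measuring one photon therefore tells you the other's outcome with certainty, which is the correlation the entanglement link exploits.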


Many frameworks for implementing user interfaces (Angular2, Vue, React, etc.) make use of callback procedures, or event handlers, which, as a result of an event, directly perform the corresponding action. Deciding which action to perform (be it input validation, local state update, error handling, or data fetching) often means accessing and updating some pieces of state which are not always in scope. Frameworks thus include some state management or communication capabilities to handle delivering state where it is relevant and needed, and updating it when allowed and required. Component-based user interface implementations generally feature pieces of state, and actions scattered along the component tree in non-obvious ways. For instance, a todo list application may be written as <TodoList items><TodoItem></TodoList>. Assuming a TodoItem manages its deletion, it has to communicate the deletion up the hierarchy for the parent TodoList to be called with the updated item list.
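The TodoList/TodoItem pattern above, where a child communicates a deletion up the hierarchy via an event handler, can be sketched framework-free; class and method names are illustrative, not any framework's API:

```python
class TodoItem:
    def __init__(self, text, on_delete):
        self.text = text
        self._on_delete = on_delete  # event handler supplied by the parent

    def delete_clicked(self):
        # The child doesn't mutate the list itself; it notifies the parent.
        self._on_delete(self.text)

class TodoList:
    """The parent owns the item list; each child receives a callback so a
    deletion is communicated up the hierarchy, mirroring
    <TodoList items><TodoItem></TodoList>."""

    def __init__(self, items):
        self.items = list(items)
        self.children = [TodoItem(text, on_delete=self._remove) for text in self.items]

    def _remove(self, text):
        self.items.remove(text)

todos = TodoList(["milk", "bread"])
todos.children[0].delete_clicked()  # child event updates the parent's state
```

The state the child needs to change is out of its scope, which is precisely why frameworks ship state management or communication machinery to deliver it where it is needed.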



Quote for the day:



"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls


Daily Tech Digest - December 01, 2019

Data Scientists: Machine Learning Skills are Key to Future Jobs


SlashData queried some 20,500 respondents from 167 countries, which means this is a pretty comprehensive survey from a global perspective. Responses were additionally weighted in order to “derive a representative distribution for platforms, segments, and types of IoT [projects],” according to the report accompanying the data. According to the survey, some 45 percent of developers want to either learn or improve their existing data science/machine learning skills. This outpaces the desire to learn UI design (33 percent of respondents), cloud native development such as containers (25 percent), project management (24 percent), and DevOps (23 percent). “The analysis of very large datasets is now made possible and, more importantly, affordable to most due to the emergence of cloud computing, open-source data science frameworks and Machine Learning as a Service (MLaaS) platforms,” the report added. “As a result, the interest of the developer community in the field is growing steadily.”



Did You Forget the Ops in DevOps?


This person with deep operational knowledge was "too busy" fighting fires in production environments, and had not been included in the devops transformation conversations for this large organization. He worked for a different legal entity in a different building, despite being part of the same group, and he was about to leave due to lack of motivation. Yet the organization was claiming to do "devops". The action we took in this case was to take offline a number of experts who were effectively bottlenecks to the flow of work (if you’ve read the book "The Phoenix Project" you will recognize the "Brent" character here). We asked them to build the new components they needed with infrastructure-as-code under a Scrum approach. We even took them to a different city so they wouldn't get disturbed by their regular coworkers. After a couple of months, they rejoined their previous teams but now had a totally new approach of working. Even the oldest Unix sysadmin had now become an agile evangelist that preached infrastructure as code rather than manually hot fixing production.


Is your approach to enterprise architecture relevant in today’s world?

In today’s fast-changing market, the role of enterprise architecture is more important than ever to prevent organisations from creating barriers to future change or expensive technical debt. To remain relevant, modern enterprise architecture approaches must be customer experience (CX)-driven, agile, and deliver the right level of detail just in time for when it needs to be consumed. Static business capabilities are no longer the only anchor point for architecting enterprise technology environments. CX is now a dominant driver of strategy and so businesses need to understand how stakeholders (customers, employees, partners, etc.) consume services and how they can be enabled by technology and platforms. The importance of capturing, managing, analysing and exposing data grows each year. Therefore, enterprise architecture needs to reinvent itself again to incorporate the needs of a rapidly evolving digital world. In a CX-driven planning approach, customer journeys are used to define the services and channels of engagement.


Edge Computing – Key Drivers and Benefits for Smart Manufacturing

Edge computing means faster response times and increased reliability and security. A lot has been said about how the Internet of Things (IoT) is revolutionizing the manufacturing world. Many studies have predicted more than 50 billion devices will be connected by 2020, and over 1.44 billion data points are expected to be collected per plant per day. This data will be aggregated, sanitized, processed, and used for critical business decisions, which means unprecedented demands and expectations on connectivity, computational power, and speed of quality of service. Can we afford any latency in critical operations such as an operator's hand trapped in a rotor, a fire, or a gas leak? This is the biggest driver for edge computing: more power closer to the data source, the "Thing" in IoT. Rather than a conventional central controlling system, this distributed control architecture is gaining popularity as an alternative to a light version of the data center, with control functions placed closer to the devices.


63% Of Executives Say AI Leads To Increased Revenues And 44% Report Reduced Costs

The McKinsey global survey found a nearly 25% year-over-year increase in the use of AI in standard business processes, with a sizable jump from the past year in companies using AI across multiple areas of their business; 58% of executives surveyed report that their organizations have embedded at least one AI capability into a process or product in at least one function or business unit, up from 47% in 2018; retail has seen the largest increase in AI use, with 60% of respondents saying their companies have embedded at least one AI capability in one or more functions or business units, a 35-percentage point increase from 2018; 74% of respondents whose companies have adopted or plan to adopt AI say their organizations will increase their AI investment in the next three years; 41% say their organizations comprehensively identify and prioritize their AI risks, citing most often cybersecurity and regulatory compliance. 84% of C-suite executives believe they must leverage AI to achieve their growth objectives, yet 76% report they struggle with how to scale AI;


How Europe’s AI ecosystem could catch up with China and the U.S.

Europe edges out the U.S. in total number of software developers (5.7 million to 4.4 million), and venture capital spending in Europe continues to rise to historically high levels. Even so, the U.S. and China beat Europe in venture capital spending, startup growth, and R&D spending. The U.S. also outpaces Europe in AI, big data, and quantum computing patents. A Center for Data Innovation study released last month also concluded that the U.S. is in the lead, followed by China, with Europe lagging behind. Multiple surveys of business executives have found that businesses around the world are struggling to scale the use of AI, but European firms trail major U.S. companies in this metric too, with the exception of smart robotics companies. This trend could be in part due to lower levels of data digitization, Bughin said. About 3-4% of businesses surveyed by McKinsey were found to be using AI at scale. The majority of those are digital native companies, he said, but 38% of major companies in the U.S. are digital natives compared to 24% in Europe.


Singapore government must realise human error also a security breach

More importantly, before dismissing man-made mistakes as "not a security risk", organisations such as the SAC need to consider the statistics. "Inadvertent" breaches brought about by human error and system glitches accounted for 49% of data breaches, according to an IBM Security report conducted by Ponemon Institute, which estimated that human errors alone cost companies $3.5 million. In fact, cybersecurity vendor Kaspersky described employees as a major hole in an organisation's fight against cyber attacks. Some 52% of businesses viewed their staff as the biggest weakness in IT security, with their careless actions putting the company's security strategy at risk. It added that 47% of businesses were most concerned about employees sharing inappropriate data via mobile devices, while careless or uninformed staff were the second-most likely cause of a serious security breach, second only to malware. Some 46% of cybersecurity incidents in the past year were attributed to careless or uninformed staff. Kaspersky further described human error on the part of staff as the "attack vector" that businesses were falling victim to.


6 essential practices to successfully implement machine learning solutions


Here’s a golden rule to remember: a machine learning algorithm is only as good as the data it’s fed. So, to use machine learning effectively, you must have the right data for the problem you’re trying to solve. And not just a few data points: machines need a lot of data to learn, often hundreds of thousands of data points. Your data will need to be formatted, cleaned, and organized for your algorithm, and you will need two datasets: one to train the model and one to evaluate its performance. So after identifying candidate use cases, shortlist the ones for which data is available and that can quickly generate value across the board. Go for multiple smaller wins and have a clear data strategy. ... With a worldwide shortage of trained data scientists, you need to empower your data analytics professionals and other domain experts with the tools and support they need to become citizen data scientists.
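The two-dataset split described above can be sketched in a few lines. This is a minimal illustration using only the Python standard library; the function name, the 80/20 ratio, and the toy records are assumptions for the example, not prescriptions from the article:

```python
import random

def train_eval_split(records, eval_fraction=0.2, seed=42):
    """Shuffle records and split them into a training set and a
    held-out evaluation set."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = records[:]       # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cutoff], shuffled[cutoff:]

# Toy dataset: in practice you would want far more data points.
data = [{"feature": i, "label": i % 2} for i in range(1000)]
train_set, eval_set = train_eval_split(data)
print(len(train_set), len(eval_set))  # 800 200
```

In a real project the evaluation set must never be touched during training, so the model's score on it reflects how it will behave on unseen data.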


The hardest part of AI & analytics is not AI - it’s data management

“This is going to enable organisations to train their AI and ML algorithms with a more complete, more comprehensive and less biased sets of data.” According to Hanson, this can be done by using good data engineering tools with AI built in. “What we actually need is not just artificial intelligence in the analytics layer — in terms of generating graphical views of data and making decisions in real-time around data — we need to make sure that we’ve got artificial intelligence in the backend to ensure we’ve got well-curated data going into our analytics engines.” He warned that if organisations fail to do this, they won’t see the benefit of analytical AI going forward. “In my opinion, a lot of mistakes could be made, some serious mistakes, if we don’t make sure that we train our analytical AI with high quality, well-curated data,” said Hanson. He added that if the data sets aren’t good, AI advocates in organisations will not get the results they expect, which could hinder any future investment in the technology.
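To make the idea of "well-curated data going into our analytics engines" concrete, here is a minimal sketch of the kind of backend checks a data pipeline might run before training. The field names, the age range, and the deduplication strategy are all illustrative assumptions, not details from the article:

```python
def curate(rows, required_fields=("age", "income"), age_range=(0, 120)):
    """Drop duplicate, incomplete, or implausible records before they
    reach an analytics or ML pipeline."""
    seen = set()
    clean = []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            continue  # duplicate record
        if any(row.get(f) is None for f in required_fields):
            continue  # incomplete record
        if not (age_range[0] <= row["age"] <= age_range[1]):
            continue  # value outside a plausible range
        seen.add(key)
        clean.append(row)
    return clean

raw = [
    {"age": 34, "income": 52000},
    {"age": 34, "income": 52000},    # exact duplicate
    {"age": None, "income": 41000},  # missing required field
    {"age": 150, "income": 60000},   # implausible age
]
print(curate(raw))  # only the first record survives
```

Even simple rules like these catch the duplicates, gaps, and outliers that would otherwise skew whatever model is trained downstream.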


How to Advance Your Enterprise Risk Management Maturity

Before you can determine whether you want to advance your ERM maturity, you must first define your appetite for risk so you can make a proper assessment. Not all companies require the same level of risk maturity; in fact, the highest level of maturity does not necessarily equal the best ERM program. Rather than immediately aiming for the highest level of maturity, companies need to take a step back and identify their priorities to understand what is best for their organization’s specific circumstances. ... An effective risk culture is one that empowers business functions to be intellectually honest about the risks they face and encourages them to align risks with strategic objectives. To accomplish this, companies must remain patient. Changing the culture of an organization of any size takes time and is not something that can be accomplished with a single meeting or memo to the staff. It takes time to educate team members properly and for leaders to demonstrate the importance of the change. ... Once you determine who should hold primary responsibility for the risk management program and have secured the necessary buy-in, you will need to measure your progress towards greater ERM maturity. One way to measure progress is to benchmark yourself against your peers.



Quote for the day:


"The science of today is the technology of tomorrow." -- Edward Teller