Daily Tech Digest - July 24, 2023

A CDO Call To Action: Stop Hoarding Data—Save The Planet

Many of the implications of rampant data hoarding are reasonably well known—including the potential compliance and privacy risks associated with storing petabytes of data you know absolutely nothing about. For many companies, "don’t ask, don’t tell" seems to be the approach when it comes to ensuring compliant management of their dark data—or at the very least, ignoring dark data represents a business risk many compliance officers seem willing to take. Other implications, such as the cost of storing dark data or the potential value that could be unlocked by operationalizing it, are also often discussed. For many CDOs, the motivation to store troves of data that may never get used is a form of FOMO, where the fear of being unable to support a future request for new analytical insights outweighs the cost of data storage. In these situations, the unwillingness of many CDOs to apply methods to measure the business value of data is a primary enabler of data hoarding, where the idea that "we might need it someday" is sufficient to drive millions in annual revenues for cloud service providers.


Google, Microsoft, Amazon, Meta Pledge to Make AI Safer and More Secure

Meta said it welcomed the White House agreement. Earlier this week, the company launched the second generation of its AI large language model, Llama 2, making it free and open source. "As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society," said Nick Clegg, Meta's president of global affairs. The White House agreement will "create a foundation to help ensure the promise of AI stays ahead of its risks," Brad Smith, Microsoft vice chair and president, said in a blog post. Microsoft is a partner on Meta's Llama 2. Earlier this year it also launched AI-powered Bing search, which makes use of ChatGPT, and it is bringing more and more AI tools to Microsoft 365 and its Edge browser. The agreement with the White House is part of OpenAI's "ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance," said Anna Makanju, OpenAI vice president of global affairs.


UN Security Council discusses AI risks, need for ethical regulation

During the meeting, members stressed the need to establish an ethical, responsible framework for international AI governance. The UK and the US have already started to outline their positions on AI regulation, while at least one arrest has occurred in China this year after the Chinese government enforced new laws relating to the technology. Malta is the only current non-permanent council member that is also an EU member state and would therefore be governed by the bloc’s AI Act, the draft of which was confirmed in a vote last month. Although AI can bring huge benefits, it also poses threats to peace, security and global stability due to its potential for misuse and its unpredictability — two essential qualities of AI systems, Clark said in comments published by the council after the meeting. “We cannot leave the development of artificial intelligence solely to private-sector actors,” he said, adding that without investment and regulation from governments, the international community runs the risk of handing over the future to a narrow set of private-sector players.


How to Choose Carbon Credits That Actually Cut Emissions

Risk is the biggest driver in business and — with trillions of dollars in annual climate-related costs and damage — the climate crisis is fast becoming a business crisis. Corporations must act now to minimize losses, illustrate meaningful climate action to shareholders and comply with fast-approaching climate regulations. Carbon credits are an important approach to scaling climate action globally and are a fast-growing strategy for delivering on corporate ESG goals. While these offsets are part of nearly every scenario that keeps global warming to 1.5 degrees Celsius, legacy carbon markets lack broad public trust: Impactful carbon solutions require clear guidelines and proven, verifiable data. ... This is an all-hands-on-deck moment. We must engage proven, reliable, and equitable methods to meet what may be the greatest threat to the future of humanity and the planet we inhabit. Carbon credits, when implemented responsibly and at scale, can be a very effective tool for humanity to use in the fight to limit the damages from climate change.


BGP Software Vulnerabilities Overlooked in Networking Infrastructure

At the heart of the vulnerabilities was message parsing. Typically, one would expect a protocol to check that a user is authorized to send a message before processing the message. FRRouting did the reverse, parsing before verifying. So if an attacker could have spoofed or otherwise compromised a trusted BGP peer's IP address, they could have executed a denial-of-service (DoS) attack, sending malformed packets in order to render the victim unresponsive for an indefinite amount of time. ... "Originally, BGP was only used for large-scale routing — Internet service providers, Internet exchange points, things like that," dos Santos says. "But especially in the last decade, with the massive growth of data centers, BGP is also being used by organizations to do their own internal routing, simply because of the scale that has been reached," to coordinate VPNs across multiple sites or data centers, for example. More than 317,000 Internet hosts have BGP enabled, most of them concentrated in China (around 92,000) and the US (around 57,000). Just under 2,000 run FRRouting — though not all, necessarily, with BGP enabled — and only around 630 respond to malformed BGP OPEN messages.
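
To make the parse-before-verify issue concrete, here is a minimal, hypothetical sketch (in Python rather than FRRouting's actual C code) of the expected ordering the article describes: the sender is checked against the configured peer list before any header fields are parsed. The peer addresses and the simplified header handling are illustrative only.

```python
import struct

# Hypothetical set of configured BGP peers; in a real daemon this would come
# from the router's configuration, not a hard-coded constant.
CONFIGURED_PEERS = {"192.0.2.10", "198.51.100.7"}

def handle_bgp_message(source_ip: str, raw: bytes) -> None:
    # 1. Verify the sender first: traffic from unconfigured sources is dropped
    #    before any parsing code ever sees the bytes.
    if source_ip not in CONFIGURED_PEERS:
        return  # silently discard

    # 2. Only then parse the (simplified) BGP header: a 16-byte marker,
    #    a 2-byte length and a 1-byte type -- 19 bytes minimum per RFC 4271.
    if len(raw) < 19:
        raise ValueError("truncated BGP header")
    marker, length, msg_type = struct.unpack("!16sHB", raw[:19])
    if length < 19 or length > 4096:
        raise ValueError("invalid BGP message length")
    # ... dispatch on msg_type (OPEN, UPDATE, KEEPALIVE, NOTIFICATION) ...
```

Even with this ordering, a spoofed trusted-peer address would still reach the parser, which is why the sketch also rejects malformed lengths outright rather than letting bad input hang the process.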


How IT leaders are driving new revenue

CIOs who are driving new revenue are: Delivering technologies designed to meet specific business outcomes. For example, Narayaran has seen CIOs focus their teams on creating applications designed not merely for high availability and reliability but for hitting very specific business goals — such as enabling on-time deliveries to customers. Unlocking data’s potential. Narayaran says he has also seen CIOs make big plays with their data programs, investing in the technology infrastructure needed to bring together and analyze data sets to create new services or products and drive business objectives such as improved customer retention and customer stickiness. Co-creating with their business unit colleagues. Notably, Narayaran says CIOs are approaching their business unit colleagues with such proposals. “CIOs [are saying], ‘Here’s an opportunity. We have this data, and we can make this data do this for you,’ and they then bring that to life. And if they say, ‘This is what we have and this is what we can do,’ then the business, too, can come up with new ideas.”


Data centers grapple with staffing shortages, pressure to reduce energy use

A consistent issue for the past decade, attracting and retaining new talent will continue to challenge data center leaders, according to Uptime. About two-thirds of survey respondents said they have “problems recruiting or retaining staff,” but the trend appears to be stabilizing as it hasn’t increased over last year’s data. According to the report: More than one-third (35%) of respondents say their staff is being hired away, which is more than double the 2018 figure of 17%. And many believe operators are poaching from within the sector, with 22% of respondents reporting that they lost staff to their competitors. Staffing challenges are highest among operations management staff and those specializing in mechanical and electrical trades, as well as with junior-level staff. “It's been challenging the data center industry for about a decade. It has been escalating in recent years. Our survey data this year suggests that it may, at least this year, not be getting worse, maybe stabilizing. And poaching is a problem of people who do get qualified applicants into jobs – they do find them hired away,” said Jacqueline Davis, research analyst at Uptime Institute.


Why API attacks are increasing and how to avoid them

First, exposing APIs to network requests significantly increases the attack surface, says Johannes Ullrich, dean of research at the SANS Technology Institute. “An attack no longer needs access to the local system but can attack the API remotely,” he says. Even worse, APIs are designed to be easy to find and use, Ullrich says. They’re “self-documenting” and are typically based on common standards. That makes them convenient for developers, but also prime targets for hackers. Since APIs are designed to help applications talk to one another, they often have access to core company data, such as financial information or transaction records. It’s not only the data itself that’s at risk. The API documentation can also give outsiders insight into business logic, says Ullrich. “This insight may make finding weaknesses in the business process easier.” Then there’s the quantity issue. Companies deploying cloud-based applications no longer deploy a single monolithic application with a single access point in and out.


Journey to Quantum Supremacy: First Steps Toward Realizing Mechanical Qubits

Research and development in this field is advancing at an astonishing pace as groups race to see which system or platform outruns the others. To mention a few, platforms as diverse as superconducting Josephson junctions, trapped ions, topological qubits, ultra-cold neutral atoms, or even diamond vacancies constitute the zoo of possibilities for making qubits. So far, only a handful of qubit platforms have demonstrated the potential for quantum computing, ticking the checklist of high-fidelity controlled gates, easy qubit-qubit coupling, and good isolation from the environment, which means sufficiently long-lived coherence. ... The realization of a mechanical qubit is possible if the quantized energy levels of a resonator are not evenly spaced. The challenge is to keep the nonlinear effects big enough in the quantum regime, where the oscillator’s zero-point displacement is minuscule. If this is achieved, then the system may be used as a qubit by manipulating it between the two lowest quantum levels without driving it into higher energy states.
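
One standard way to formalize the "unevenly spaced levels" requirement (background added here, not taken from the article) is to compare a harmonic resonator with one carrying a Kerr-type nonlinearity K:

```latex
% Harmonic resonator: evenly spaced levels, every transition sits at \omega
E_n^{\mathrm{harm}} = \hbar\omega\left(n + \tfrac{1}{2}\right)

% A Kerr-type nonlinearity shifts the higher levels
H = \hbar\omega\, a^{\dagger}a + \frac{\hbar K}{2}\, a^{\dagger}a^{\dagger}a a
\quad\Longrightarrow\quad
E_n = \hbar\omega\, n + \frac{\hbar K}{2}\, n(n-1) + \text{const.}

% Transition frequencies now differ by the anharmonicity:
\omega_{1\to 2} - \omega_{0\to 1} = K
```

If |K| exceeds the relevant linewidths, a drive at the 0-to-1 frequency addresses only the two lowest levels, which is exactly the qubit condition the article describes.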


ChatGPT Amplifies IoT, Edge Security Threats

ChatGPT and its ilk are rapidly being integrated or embedded in commercial and consumer IoT of all types. Many imagine AI models to be the most sophisticated security threat to date. But most of what is imagined is indeed imaginary. “Now, if an actual AI emerges, be very worried if the kill switch is very far away from humans,” says Jayendra Pathak, chief scientist at SecureIQLab. He, like others in security and AI, agrees that the chances of an actual general artificial intelligence developing any time soon are still very low. But as to the latest AI sensation, ChatGPT, well that’s another kind of scare. “ChatGPT poses [insider] threats -- similar to the way rogue or ‘all-knowing employees’ pose -- to IoT. Some of the consumer IoT vulnerabilities pose the same risk as a microcontroller or microprocessor does,” Pathak says. In essence, ChatGPT’s potential threats spring from its training to be helpful and useful. Such a rosy prime directive can be very harmful, however.



Quote for the day:

"No man can stand on top because he is put there." -- H. H. Vreeland

Daily Tech Digest - July 23, 2023

Sustainable Computing - With An Eye On The Cloud

There are two parts to sustainability goals: 1. How do cloud service providers make their data centers more sustainable?; 2. What practices can cloud service customers adopt to better align with the cloud and make their workloads more sustainable? Let us first look at the question of how businesses should be planning for sustainability. How should they bake in sustainability aspects as part of their migration to the cloud? The first aspect to consider, of course, is choosing the right cloud service provider. It is essential to select a carbon-thoughtful provider based on its commitment to sustainability as well as how it plans, builds, powers, operates, and eventually retires its physical data centers. The next aspect to consider is the process of migrating services to an infrastructure-as-a-service deployment model. Organizations can carry out such migrations without re-engineering for the cloud, and even this approach can drastically reduce energy and carbon emissions compared with running the same workloads in an on-premises data center.


The Intersection of AI and Data Stewardship: A New Era in Data Management

In addition to improving data quality, AI can also play a crucial role in enhancing data security and privacy. With the increasing number of data breaches and growing concerns around data privacy, organizations must ensure that their data is protected from unauthorized access and misuse. AI can help organizations identify potential security risks and vulnerabilities in their data infrastructure and implement appropriate measures to safeguard their data. Furthermore, AI can assist in ensuring compliance with various data protection regulations, such as the General Data Protection Regulation (GDPR), by automating the process of identifying and managing sensitive data. Another area where AI and data stewardship intersect is in data governance. Data governance refers to the set of processes, policies, and standards that organizations use to ensure the proper management of their data assets. AI can help organizations establish and maintain robust data governance practices by automating the process of creating, updating, and enforcing data policies and rules. 
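
As a rough illustration of what "automating the process of identifying sensitive data" can mean in practice, here is a minimal rule-based sketch in Python. Real AI-assisted stewardship tools would layer trained classifiers (for example, named-entity recognition models) on top of simple patterns like these; the pattern set shown is purely illustrative.

```python
import re

# Minimal rule-based pass for flagging likely-sensitive fields; a production
# system would combine patterns like these with ML-based classification.
PII_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone":       re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def flag_sensitive(record: dict) -> dict:
    """Return {field_name: [matched_pii_types]} for one record."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, pat in PII_PATTERNS.items()
                if isinstance(value, str) and pat.search(value)]
        if hits:
            findings[field] = hits
    return findings

print(flag_sensitive({"note": "reach me at jane@example.com", "id": 42}))
# -> {'note': ['email']}
```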


Saga Pattern With NServiceBus in C#

In its simplest form, a saga is a sequence of local transactions. Each transaction updates data within a single service, and each service publishes an event to trigger the next transaction in the saga. If any transaction fails, the saga executes compensating transactions to undo the impact of the failed transaction. The Saga Pattern is ideal for long-running, distributed transactions where each step needs to be reliable and reversible. It allows us to maintain data consistency across services without the need for distributed locks or two-phase commit protocols, which can add significant complexity and performance overhead. ... The Saga Pattern is a powerful tool in our distributed systems toolbox, allowing us to manage complex business transactions in a reliable, scalable, and maintainable way. Additionally, when we merge the Saga Pattern with the Event Sourcing Pattern, we significantly enhance traceability by constructing a comprehensive sequence of events that can be analyzed to comprehend the transaction flow in-depth.
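
The article covers NServiceBus in C#; purely as a language-agnostic illustration of the pattern itself (not NServiceBus code), a saga can be sketched as a sequence of local steps, each paired with a compensating action that is replayed in reverse on failure. The service and method names below are hypothetical.

```python
# Orchestrated saga sketch: each step is a local transaction paired with a
# compensating action that undoes it if a later step fails.
class OrderSaga:
    def __init__(self, payments, inventory, shipping):
        self.steps = [
            (payments.charge,   payments.refund),
            (inventory.reserve, inventory.release),
            (shipping.schedule, shipping.cancel),
        ]

    def run(self, order):
        completed = []
        for do, undo in self.steps:
            try:
                do(order)                 # local transaction in one service
                completed.append(undo)
            except Exception:
                # A step failed: run compensations in reverse order so the
                # earlier services are rolled back to a consistent state.
                for compensate in reversed(completed):
                    compensate(order)
                raise
```

In NServiceBus, the equivalent state tracking and message handling are provided by the framework's saga base class and persistence; the sketch only shows the control flow of local transactions and compensations.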


Efficiency and sustainability in legacy data centers

A recent analyst report found a “wave of technological trends” is driving change throughout the data center sector at an unprecedented pace, with “rapidly diversifying business applications generating terabytes of data.” All that data has to go somewhere, and as hyperscale cloud providers push some of their workloads away from large, CapEx-intensive centralized hubs into Tier II and Tier III colocation markets, it’s looking like colos may be in greater demand than ever before. However, these circumstances pose a serious challenge for the colocation sector, as “the resulting workloads have exploded onto legacy data center infrastructures”, many of which may be “ill-equipped to handle them.” Now, the colocation market finds itself caught between two conflicting macroeconomic forces. On one hand, the growth in demand puts greater pressure on operators in Tier II and III markets to build more facilities, faster, to accommodate larger and more complex workloads; on the other, the existential need to reduce carbon emissions and slash energy consumption is vital.


A quantum computer that can’t be simulated on classical hardware could be a reality next year

The current-generation machines are still very much in the noisy era of quantum computing. Ilyas Khan, who founded Cambridge Quantum out of the University of Cambridge in 2014 and now works as chief product officer, told Tech Monitor that we’re moving into the “mid-stage NISQ” era, where the machines are still noisy but we’re seeing signs of logical qubits and utility. Thanks to error correction, detection and mitigation techniques, even on noisy error-prone qubits, many companies have been able to produce usable outcomes. But at this stage, the structure and performance of the quantum circuits could still be simulated using classical hardware. That will change next year, says Khan. “We think it’s important for quantum computers to be useful in real-life problem solving,” he says. “Our current system model, H2, has 32 qubits in its current instantiation, all to all connected with mid-circuit measurement.”


Protecting your business through test automation

The inadequate pre-launch testing forces teams to scramble post-launch to fix faulty software applications with renewed urgency, with the added pressure of managing the potential loss of revenue and damaged brand reputation caused by the defect. When the faulty software reaches end users, dissatisfied customers are a problem that could have far longer-reaching effects as users pass on their negative experiences to others. The negative feedback could also prevent potential new customers from ever trying the software in the first place. So why is software not being tested properly? Changing customer behaviours in the financial services sector, as well as increased competition from digital-native fintech start-ups, have led many organisations to invest heavily in digital transformation in recent years. With companies coming under more pressure than ever to respond to market demands and user experience trends through increasingly frequent software releases, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point.


Implementing zero trust with the Internet of Things (IoT)

There’s a strongly held view that it simply isn’t possible to trust any IoT device, even if it’s equipped with automatic security updating. “As a former CIO, my guidance is that preparation is the best defense,” Archundia tells ITPro. IoT devices are often just too much of a risk; they’re too much of a soft entry point into the organization to overlook them. It’s best to assume each device is a hole in an enterprise’s defenses. Perhaps each device won’t be a hole at all times, but some may be for at least some of the time. So long as the hole isn’t plugged, it can be found and exploited. That’s actually fine in a zero trust environment, because it assumes every single act, by a human or a device, could be malicious. ... “Because zero trust focuses on continuously verifying and placing security as close to each asset as possible, a cyber attack need not have far-reaching consequences in the organization,” he says. “By relying on techniques such as secured zones, the organization can effectively limit the blast radius of an attack, ensuring that a successful attack will have limited benefits for the threat agent.”


US Data Privacy Relationship Status: It’s Complicated

The American Data Privacy and Protection Act (ADPPA) is a bill that, if passed, would become the first set of federal privacy regulations to supersede state laws. While it passed a House of Representatives commerce committee vote by a 53-2 margin in July 2022, the bill is still waiting on a full House vote and then a Senate vote. In the US, 10 states have enacted comprehensive privacy laws, including California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Texas, Utah, and Virginia. More than a dozen other states have proposed bills in various stages of activity. The absence of an overarching federal law means companies must pick and choose based on where they happen to be doing business. Some businesses opt to start with the most stringent law and model their own data privacy standards accordingly. The current global standard for privacy is Europe’s 2018 General Data Protection Regulation (GDPR), which has become the model for other data privacy proposals. Since many large US companies do business globally, they are very familiar with GDPR.


KillNet DDoS Attacks Further Moscow's Psychological Agenda

Mandiant's assessment of the 500 DDoS attacks launched by KillNet and associated groups from Jan. 1 through June 20 offers further evidence that the collective isn't some grassroots assembly of independent, patriotic hackers. "KillNet's targeting has consistently aligned with established and emerging Russian geopolitical priorities, which suggests that at least part of the influence component of this hacktivist activity is intended to directly promote Russia's interests within perceived adversary nations vis-a-vis the invasion of Ukraine," Mandiant said. Researchers said KillNet and its affiliates often attack technology, social media and transportation firms, as well as NATO. ... To hear KillNet's recounting of its attacks via its Telegram channel, these hacktivists are nothing short of devastating. The same goes for other past and present members of the KillNet collective, including KillMilk, Tesla Botnet, Anonymous Russia and Zarya. Recent attacks by Anonymous Sudan have involved paid cloud infrastructure and had a greater impact, although it's unclear if this will become the norm.


Agile vs. Waterfall: Choosing the Right Project Methodology

Choosing the right project management methodology lays the foundation for effective planning, collaboration, and delivery. Failure to select the appropriate methodology can lead to many challenges and setbacks that can hinder project progress and ultimately impact overall success. Let's delve into why it's crucial to choose the right project management methodology and explore in-depth what can go wrong if an unsuitable methodology is employed. ... The right methodology enables effective resource allocation and utilization. Projects require a myriad of resources, including human, financial, and technological. If you select an inappropriate methodology, you can experience inefficient resource management, causing budget overruns, underutilization of skills, and time delays. For instance, an Agile methodology that relies heavily on frequent collaboration and iterative development may not be suitable for projects with limited resources and a hierarchical team structure.



Quote for the day:

"People leave companies for two reasons. One, they don't feel appreciated. And two, they don't get along with their boss." -- Adam Bryant

Daily Tech Digest - July 22, 2023

All-In-One Data Fabrics Knocking on the Lakehouse Door

The fact that IBM, HPE, and Microsoft made such similar data fabric and lakehouse announcements indicates there is strong market demand, Patel says. But it’s also partly a result of the evolution of data architecture and usage patterns, he says. “I think there are probably some large enterprises that decide, listen, I can’t do this anymore. You need to go and fix this. I need you to do this,” he says. “But there’s also some level of just where we’re going…We were always going to be in a position where governance and security and all of those types of things just become more and more important and more and more intertwined into what we do on a daily basis. So it doesn’t surprise me that some of these things are starting to evolve.” While some organizations still see value in choosing the best-of-breed products in every category that makes up the data fabric, many will gladly give up having the latest, greatest feature in one particular area in exchange for having a whole data fabric they can move into and be productive from day one.


Shift Left With DAST: Dynamic Testing in the CI/CD Pipeline

The integration of DAST in the early stages of development is crucial for several reasons. First, by conducting dynamic security testing from the onset, teams can identify vulnerabilities earlier, making them easier and less costly to fix. This proactive approach helps to prevent security issues from becoming ingrained in the code, which can lead to significant problems down the line. Second, early integration of DAST encourages a security-focused mindset from the beginning of the project, promoting a culture of security within the team. This cultural shift is crucial in today’s cybersecurity climate, where threats are increasingly sophisticated, and the stakes are higher than ever. DAST doesn’t replace other testing methods; rather, it complements them. By combining these methods, teams can achieve a more comprehensive view of their application’s security. In a shift left approach, this combination of testing methods can be very powerful. By conducting these tests early and often, teams can ensure that both the external and internal aspects of their application are secure. This layered approach to security testing can help to catch any vulnerabilities that might otherwise slip through the cracks.
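
As a minimal sketch of what a shifted-left dynamic check might look like, the following Python step probes a running staging deployment and fails the build if anything is found. The target URL and the specific checks are placeholders; a real pipeline would typically invoke a full DAST scanner at this stage instead.

```python
import sys
import requests

STAGING_URL = "https://staging.example.com"  # placeholder target

def check_security_headers() -> list[str]:
    """Dynamically probe the running app, the way a DAST step would."""
    resp = requests.get(STAGING_URL, timeout=10)
    problems = []
    for header in ("Content-Security-Policy", "Strict-Transport-Security"):
        if header not in resp.headers:
            problems.append(f"missing {header}")
    if "server" in resp.headers:
        problems.append(f"server banner exposed: {resp.headers['server']}")
    return problems

if __name__ == "__main__":
    findings = check_security_headers()
    for f in findings:
        print("FINDING:", f)
    # Fail the pipeline stage if anything was found, so issues surface early.
    sys.exit(1 if findings else 0)
```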


First known open-source software attacks on banking sector could kickstart long-running trend

In the first attack detailed by Checkmarx, which occurred on 5 April and 7 April, a threat actor leveraged the NPM platform to upload packages that contained a preinstall script that executed its objective upon installation. To appear more credible, the attacker created a spoofed LinkedIn profile page of someone posing as an employee of the victim bank. Researchers originally thought this may have been linked to legitimate penetration testing services commissioned by the bank, but the bank revealed that to not be the case and that it was unaware of the LinkedIn activity. The attack itself was modeled on a multi-stage approach which began with running a script to identify the victim’s operating system – Windows, Linux, or macOS. Once identified, the script then decoded the relevant encrypted files in the NPM package which then downloaded a second-stage payload. Checkmarx said that the Linux-specific encrypted file was not flagged as malicious by online virus scanner VirusTotal, allowing the attacker to “maintain a covert presence on the Linux systems” and increase its chances of success.
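
On the defensive side, one simple hygiene measure this incident suggests is auditing dependencies for install-time hooks, since the malicious packages relied on a preinstall script running automatically. The sketch below (hypothetical, in Python) flags any package under node_modules that declares such hooks.

```python
import json
from pathlib import Path

# Install-time hooks that run automatically on `npm install` -- the same
# mechanism the malicious packages abused via a "preinstall" script.
HOOKS = {"preinstall", "install", "postinstall"}

def find_install_hooks(node_modules: str = "node_modules"):
    findings = []
    for manifest in Path(node_modules).glob("**/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue
        hooks = HOOKS & scripts.keys()
        if hooks:
            findings.append((str(manifest.parent), sorted(hooks)))
    return findings

for pkg, hooks in find_install_hooks():
    print(f"{pkg}: runs {', '.join(hooks)} on install -- review before trusting")
```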


From data warehouse to data fabric: the evolution of data architecture

By introducing domain‑oriented data ownership, domain teams become accountable for their data and products, improving data quality and governance. Traditional data lakes often encounter challenges related to scalability and performance when handling large volumes of data. However, data mesh architecture solves these scalability issues through its decentralized and self‑serve data infrastructure. With each domain having the autonomy to choose the technologies and tools that best suit its needs, data mesh allows teams to scale their data storage and processing systems independently. ... Data Fabric is an integrated data architecture that is adaptive, flexible, and secure. It is an architectural approach and technology framework that addresses data lake challenges by providing a unified and integrated view of data across various sources. Data Fabric allows faster and more efficient access to data by abstracting away the technological complexities involved in data integration, transformation, and movement so that anybody can use it.


What Is the Role of Software Architect in an Agile World?

It has become evident that there is a gap between the architecture team and those who interact with the application on a daily basis. Even in the context of the microservice architecture, failing to adhere to best practices can result in a tangled mess that may force a return to monolithic structures, as we have seen with Amazon Web Services. I believe that it is necessary to shift architecture left and provide architects with better tools to proactively identify architecture drift and technical debt buildup, injecting architectural considerations into the feature backlog. With few tools to understand the architecture or identify the architecture drift, the role of the architect has become a topic of extensive discussion. Should every developer be responsible for architecture? Most companies have an architect who sets standards, goals, and plans. However, this high-level role in a highly complex and very detailed software project will often become detached from the day-to-day reality of the development process. 


Rapid growth without the risk

The case for legacy modernization should today be clear: technical debt is like a black hole, sucking up an organization’s time and resources, preventing it from developing the capabilities needed to evolve and adapt to drive growth. But while legacy systems can limit and inhibit business growth, from large-scale disruption to subtle but long-term stagnation, changing them doesn’t have to be a painful process of “rip-and-replace.” In fact, rather than changing everything only to change nothing, an effective program enacts change in people, processes and technology incrementally. It focuses on those areas that will make the biggest impact and drive the most value, making change manageable in the short term yet substantial in its effect on an organization's future success and sustainable in the long term. In an era where executives often find themselves in FOMU (fear of messing up) mode, they would be wise to focus on exactly those areas of legacy modernization.


Data Fabric: How to Architect Your Next-Generation Data Management

The data fabric encompasses a broader concept that goes beyond standalone solutions such as data virtualization. Rather, the architectural approach of a data fabric integrates multiple data management capabilities into a unified framework. The data fabric is an emerging data management architecture that provides a net that is cast to stitch together multiple heterogeneous data sources and types through automated data pipelines. ... For business teams, a data fabric empowers nontechnical users to easily discover, access, and share the data they need to perform everyday tasks. It also bridges the gap between data and business teams by including subject matter experts in the creation of data products. ... Implementing an efficient data fabric architecture is not accomplished with a single tool. Rather, it incorporates a variety of technology components such as data integration, data catalog, data curation, metadata analysis, and augmented data orchestration. Working together, these components deliver agile and consistent data integration capabilities across a variety of endpoints throughout hybrid and multicloud environments.


Data Lineage Tools: An Overview

Modern data lineage tools have evolved to meet the needs of organizations that handle large volumes of data. These tools provide a comprehensive view of the journey of data from its source to its destination, including all transformations and processing steps along the way. They enable organizations to trace data back to its origins, identify any changes made along the way, and ensure compliance with regulatory requirements. One key feature of modern lineage tools is their ability to automatically capture and track metadata across multiple systems and platforms. This capability removes the need for manual, time-consuming documentation. Another important aspect of modern data lineage tools is their integration with other technologies such as metadata management systems, Data Governance platforms, and business intelligence solutions. This enables organizations to create a unified view of their data landscape and make informed decisions based on accurate, up-to-date information.


The Impact of AI Data Lakes on Data Governance and Security

One of the primary concerns with AI data lakes is the potential for data silos to emerge. Data silos occur when data is stored in separate repositories or systems that are not connected or integrated with one another. This can lead to a lack of visibility and control over the data, making it difficult for organizations to enforce data governance policies and ensure data security. To mitigate this risk, organizations must implement robust data integration and management solutions that enable them to maintain a comprehensive view of their data landscape and ensure that data is consistently and accurately shared across systems. Another challenge associated with AI data lakes is the need to maintain data quality and integrity. As data is ingested into the data lake from various sources, it is essential to ensure that it is accurate, complete, and consistent. Poor data quality can lead to inaccurate insights and decision-making, as well as increased security risks. 


AppSec Consolidation for Developers: Why You Should Care

Complicated and messy AppSec programs are yielding a three-fold problem: unquantifiable or unknowable levels of risk for the organization, ineffective resource management and excessive complexity. This combined effect leaves enterprises with a fragmented picture of total risk and little useful information to help them strengthen their security posture. ... An increase in the number of security tools leads to an increase in the number of security tests, which in turn translates to an increase in the number of results. This creates a vicious cycle that adds complexity to the AppSec environment that is both unnecessary and avoidable. Most of the time, these results are stored in their respective point tools. As a result, developers frequently receive duplicate issues as well as remediation guidance that is ineffective or lacking context, causing them to waste critical time and resources. Without consolidated and actionable outcomes, it is impossible to avoid duplication of findings and remediation actions.



Quote for the day:

"There is no substitute for knowledge." -- W. Edwards Deming

Daily Tech Digest - July 21, 2023

Attackers find new ways to deliver DDoSes with “alarming” sophistication

The newer methods attempt to do two things: (1) conceal the maliciousness of the traffic so defenders don’t block it and (2) deliver ever-larger traffic floods that can overwhelm targets even when they have DDoS mitigations in place. ... Another method on the rise is the exploitation of servers running unpatched software for the Mitel MiCollab and MiVoice Business Express collaboration systems, which act as a gateway for transferring PBX phone communications to the Internet and vice versa. A vulnerability tracked as CVE-2022-26143 stems from an unauthenticated UDP port the unpatched software exposes to the public Internet. By flooding a vulnerable system with requests that appear to come from the victim, the system in turn pummels the victim with a payload that can be 4 billion times bigger. This amplification method works by issuing what’s called a “startblast” debugging command, which simulates a flurry of calls to test systems. “As a result, for each test call, two UDP packets are sent to the issuer, enabling an attacker to direct this traffic to any IP and port number to amplify a DDoS attack,” the Cloudflare researchers wrote.
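
The "4 billion times bigger" figure can be read as an amplification factor, the standard way reflection attacks are measured (this framing is general background, not from the article):

```latex
% Reflection/amplification attacks are usually characterized by the ratio of
% traffic reflected at the victim to traffic the attacker must send:
\text{amplification factor} \;=\;
\frac{\text{bytes sent to the victim by the reflector}}{\text{bytes sent to the reflector by the attacker}}
```

A single small spoofed "startblast" command that triggers a long stream of test-call packets toward the victim is what pushes this ratio to such extreme values.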


Overcoming user resistance to passwordless authentication

A passwordless platform can replace these siloed mechanisms with a single experience that encompasses both biometric-based identity verification and authentication. During initial on-boarding, the system validates the integrity of the device, captures biometric data (selfie, live selfie, fingerprint, etc.) and can even verify government documents (driver’s license, passport, etc.), which creates a private, reusable digital wallet that is stored in the device TPM / secure enclave. ... For legacy systems that an organization can’t or won’t migrate to passwordless, some passwordless platforms use facial matching to reset or change passwords. This eliminates the friction associated with legacy password reset tools that are often targeted by cybercriminals. Some passwordless authentication platforms even support offline access when internet access is not available or during a server outage. They can also replace physical access tokens – such as building access cards – by allowing users to authenticate via the same digital wallet that provides access to the IT network.


Apple eyes a late arrival to the generative AI party

Privacy isn’t just an advantage in consumer markets; it also matters within the enterprise. Anxious to protect company data, major enterprises including Apple, Samsung, and others have banned employees from using ChatGPT or GitHub Copilot internally. The desire to use these tools exists, but not at the cost of enterprise privacy. Within the context of Apple’s growing status in enterprise IT, the eventual introduction of LLM services that can deliver powerful results while also having privacy protection built in means the company will be able to provide tools enterprise employees might be permitted to use. Not only this, but those tools could end up displaying a degree of personal contextual relevance that isn’t available elsewhere — without sharing key personal data with others. So, there’s a lot of optimism; it is, after all, not the first time Apple has appeared to be late to a party and then delivered a better experience than available elsewhere. This optimism was reflected swiftly by investors. While warning that the next iPhone may not ship until October, Bank of America raised its Apple target to $210 per share from $190.


Why — and how — high-performance computing technology is coming to your data center

Not long ago, conventional thinking was that high-performance computing was only required for exceptionally data-intensive applications within select industries — aerospace, oil and gas, and pharmaceuticals, for example, in addition to supercomputing centers dedicated to solving large, complex problems. This is no longer the case. As data volumes have exploded, many organizations are tapping into these technologies and techniques to perform essential functions. In a relatively short timeframe, they’ve gone from believing they would never need anything beyond routine compute performance capabilities, to depending on high-performance computing to fuel their business success. ... In conjunction with AI and data analytics, high-performance computing is powering entire industries that depend for their existence on performing large-scale, mathematically intensive computations for a variety of needs, including faster business insights and results to drive improved decision-making.


Backup in the age of cloud

While the 3-2-1 strategy originated at a time when 30GB hard drives and CD backups were prevalent, it has adapted to the present era of 18TB drives and widespread cloud storage. The strategy's simplicity and effectiveness in safeguarding valuable information, Sia says, have contributed to its popularity among data protection experts. Many enterprises today have embraced the 3-2-1 concept, with primary backups stored in a datacentre for quick recovery, and a second copy kept on a different infrastructure to avoid a single point of failure, says Daniel Tan, head of solution engineering for ASEAN, Japan, Korea and Greater China at Commvault. “In addition, the same data could be uploaded to an offsite cloud on a regular basis as the third online copy, which can be switched offline if required, to provide an air gap that effectively protects data from being destroyed, accessed, or manipulated in the event of a cyber security attack or system failure.” Indeed, the cloud, with its geographical and zone redundancy, flexibility, ease of use, and scalability, is an increasingly important part of an organisation’s 3-2-1 backup strategy, which remains relevant today.
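
As a toy illustration of the rule itself, the following Python sketch checks a backup inventory against the three conditions (at least three copies, on at least two different media or infrastructures, with at least one offsite); the inventory fields and values are hypothetical.

```python
# Toy check of a backup inventory against the 3-2-1 rule.
copies = [
    {"location": "primary-dc",   "medium": "disk",   "offsite": False},
    {"location": "secondary-dc", "medium": "tape",   "offsite": False},
    {"location": "cloud-bucket", "medium": "object", "offsite": True},
]

def satisfies_3_2_1(copies: list) -> bool:
    enough_copies = len(copies) >= 3                          # 3 copies
    enough_media  = len({c["medium"] for c in copies}) >= 2   # 2 media
    has_offsite   = any(c["offsite"] for c in copies)         # 1 offsite
    return enough_copies and enough_media and has_offsite

print(satisfies_3_2_1(copies))  # True for the inventory above
```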


Megatrend alert: The rise of ubiquitous computing

First, I get that cloud computing is also ubiquitous in architecture. However, we use these resources as if they are centrally located, at least virtually. Moving to a more ubiquitous model means we can leverage any connected platform at any time for any purpose. This means processing and storage occur across public clouds, your desktop computer, smartwatch, phone, or car. You get the idea—anything that has a processor and/or storage. With a common abstracted platform, we push applications and data out on an abstracted space, and it finds the best and most optimized platform to run on or across platforms as distributed applications. For instance, we develop an application, design a database on a public cloud platform, and push it to production. The application and the data set are then pushed out to the best and most optimized set of platforms. This could be the cloud, your desktop computer, your car, or whatever, depending on what the application does and needs. Of course, this is not revolutionary; we’ve been building complex distributed systems for years.


MIT Makes Probability-Based Computing a Bit Brighter

At the heart of the team’s p-bit is a component called an optical parametric oscillator (OPO), which is essentially a pair of mirrors that bounce light back and forth between them. The “vacuum” here is not a physical vacuum in the same sense that outer space is a vacuum, however. “We do not actually pump a vacuum,” Roques-Carmes says. “In principle...it’s in the dark. We’re not sending in any light. And so that’s what we call the vacuum state in optics. There’s just no photon, on average, in the cavity.” When a laser is pumped into the cavity, the light oscillates at a specific frequency. But each time the device is powered up, the phase of the oscillation can take on one of two states. Which state it settles on depends on quantum phenomena known as vacuum fluctuations, which are inherently random. This quantum effect is behind such well-observed phenomena as the Lamb shift of atomic spectra and the Casimir and van der Waals forces found in nanosystems and molecules, respectively. OPOs have previously been used to generate random numbers, but for the first time the MIT team showed they could exert some control over the randomness of the output.
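
Conceptually (and without modeling the OPO physics), a p-bit is simply a bit whose outcome is random but whose bias can be controlled, as in this illustrative Python sketch:

```python
import random

def p_bit(bias: float = 0.5) -> int:
    """A probabilistic bit: returns 1 with probability `bias`, else 0.

    Conceptual stand-in for the OPO, whose phase settles into one of two
    states at random; `bias` plays the role of the control the MIT team
    exerts over that randomness.
    """
    return 1 if random.random() < bias else 0

# Unbiased p-bits behave like fair coin flips...
print(sum(p_bit() for _ in range(10_000)) / 10_000)       # ~0.5
# ...while a biased ensemble encodes a tunable probability distribution.
print(sum(p_bit(0.8) for _ in range(10_000)) / 10_000)    # ~0.8
```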


5 ways CIOs can help eliminate a culture of busyness

As leaders, it’s crucial to prioritize outcomes achieved, especially in the world of hybrid and remote work, adds Constantinides. “Rather than fixating on the process, we should concentrate on the results,” she says. “An outcome-based model provides employees with the confidence and autonomy to excel in their work.” For her, this entails establishing clear expectations and objectives, communicating them effectively, empowering teams with accountability, measuring outcomes, and offering clear feedback. “I don’t think this is only a CIO issue; it’s a leadership issue,” says Thaver. In many business environments, perceptions of busyness have existed for years. Eliminating these ideas demands that leaders push a culture of learning, unlearning and relearning so an environment is created where it’s possible, and encouraged, for people to change bad habits. According to Naren Gangavarapu, CIO at the Northern Beaches Council, CIOs must partner with the leadership and other important business stakeholders to manage expectations and make sure that outcomes are the most important metric for success.


Sophisticated HTTP and DNS DDoS attacks on the rise

The internet’s domain name system (DNS), which is responsible for translating domain names into IP addresses, has also been a frequent target for DDoS attacks. In fact, over the last quarter, more than 32% of all DDoS attacks observed and mitigated by Cloudflare were over the DNS protocol. There are two types of DNS servers: authoritative DNS servers that hold the collection of records for a domain name and all its subdomains (known as a DNS zone) and recursive DNS resolvers, which take DNS queries from end-users, look up which is the authoritative server for the requested domain, query it and return the response back to the requesting user. To make this process more efficient, DNS resolvers cache the records they obtain from authoritative servers for a period of time, so they don’t have to query the authoritative servers too often for the same information. The time before cached records expire is configurable and admins must strike a balance, because a long expiry time means the DNS resolver might end up with outdated information about record changes made on the authoritative server, negatively impacting the experience for users that rely on it.
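
The caching trade-off described here can be illustrated with a toy resolver cache: the sketch below (not production resolver code) serves answers locally until the record's TTL lapses, after which an upstream query is needed again and any changes made on the authoritative server become visible.

```python
import time

class TinyResolverCache:
    """Minimal record cache illustrating the TTL trade-off: long TTLs mean
    fewer queries to authoritative servers but staler answers after changes."""

    def __init__(self):
        self._store = {}  # name -> (record, expires_at)

    def get(self, name):
        entry = self._store.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]          # cache hit: no upstream query needed
        self._store.pop(name, None)  # expired or absent
        return None

    def put(self, name, record, ttl_seconds):
        self._store[name] = (record, time.monotonic() + ttl_seconds)

cache = TinyResolverCache()
cache.put("example.com", "192.0.2.1", ttl_seconds=300)
print(cache.get("example.com"))  # served from cache until the TTL lapses
```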


How Will the New National Cybersecurity Strategy Be Implemented?

The National Cybersecurity Strategy is buttressed by five pillars. The first focuses on defending critical infrastructure. Increasing public-private collaboration is a big part of this strategic pillar. Joshua Corman, vice president of cyber safety strategy at Claroty and former CISA chief strategist, notes that this push is being met with pushback in some cases. “After a decade plus of largely voluntary practices, like the NIST CSF [National Institute of Standards and Technology Cybersecurity Framework], some sectors are unhappy with the … more muscular rebalancing of public good and increased use of regulation,” he explains. Yet, the value of collaboration among federal agencies, the private sector, and international partners is clear. “This can lead to information sharing, knowledge exchange, and coordinated efforts to combat cyber threats effectively,” says Nicole Montgomery, cyber operations lead at IT service management company Accenture Federal Services. Jeff Williams, co-founder and CTO of Contrast Security, points out that this implementation plan represents a more proactive approach to cybersecurity.



Quote for the day:

“If you're not prepared to be wrong, you'll never come up with anything original.” -- Sir Ken Robinson

Daily Tech Digest - July 20, 2023

DSPM: Control Your Data to Prevent Issues Later

Simply put, it’s becoming increasingly hard to prevent data security breaches and hacks — the attack surfaces have become too complex. Today, there are petabytes of data being stored, but only a small percentage is actually used and touched on a regular basis. Once the data is stored, it seemingly flows to everyone, and before long, no one knows what data is stored where and who has access to it. Data has become prevalent, especially with the increase in the number of cloud and SaaS applications. All employees, not only engineers, generate and transmit data, sometimes sensitive PII data that is subject to regulations like GDPR and HIPAA. Of course, companies attempt to maintain good data hygiene with risk assessments, labeling, written policies and procedures (that no employee actually reads). All of this is largely done manually and adds more work for IT teams that are already drowning in security and risk assessments as well as security alerts. Add to that the fact that manual assessments are unsustainable and are out of date the second they are completed because they are point-in-time and don’t capture any changes.


Cracking the code: solving for 3 key challenges in generative AI

People are really afraid of machines replacing humans. And their concerns are valid, considering the human-like nature of AI tools and systems like GPT. But machines aren’t going to replace humans. Humans with machines will replace humans without machines. Think of AI as a co-pilot. It’s the user’s responsibility to keep the co-pilot in check and know its powers and limitations. Shankar Arumugavelu, SVP and Global CIO at Verizon, says we should start by educating our teams. He calls it an AI literacy campaign. “We’ve been spending time internally within the company on raising the awareness of what generative AI is, and also drawing a distinction between traditional ML and generative AI. There is a risk if we don’t clarify machine learning, deep learning, and generative AI – plus when you would use one versus the other.” Then the question is: What more can you do if something previously took you two weeks and now it takes you two hours? Some leaders will get super efficient and talk about reducing headcount and the like. Others will think, I’ve got all these people, what can I do with them?


Training AI Models – Just Because It’s ‘Your’ Data Doesn’t Mean You Can Use It

The rise of generative AI has inspired many companies to leverage the data and content they have amassed over the years, to train AI models. It is important that these companies ensure they have the right to use this data and content for this purpose. The lessons from Everalbum are worth heeding. However, the FTC is not the only threat to companies training AI models. Class action attorneys are circling the waters and smell blood. At least one recent class action suit has been filed based on the use of images uploaded by users to train AI models, arguably without the proper consent to do so. ... The foregoing cases primarily address situations where companies used data they already had to train AI models, at least arguably without consent to do so. Many companies are newly collecting data and content from various sources to build databases upon which they can train AI models. In these cases, it is important to ensure that data is properly acquired and that its use to train models is permitted. This too has led to lawsuits and more will likely be filed.


How Platform Engineering Bridges the IT and DevOps Divide

Platform engineering and “platform as a product” have been key to the PaaS ecosystem for years but are now gaining fresh traction in the industry. In Puppet’s State of DevOps Report, 51% of respondents said they had already adopted platform engineering and 93% said it was a step in the right direction. Gartner predicted 80% of software engineering organizations will have platform teams by 2026. The concept can be defined in several ways. Gartner reported platform engineering is “an emerging trend intended to modernize enterprise software delivery… designed to support the needs of software developers and others by providing common, reusable tools and capabilities, and interfacing to complex infrastructure.” PlatformEngineering.org’s recent blog post defines it as the discipline of designing and building toolchains and workflows for self-service capabilities in software engineering organizations during the cloud-native era. Regardless of definition, platform engineering is the latest iteration of IT centralization, though now attempting to retain all the benefits of distributed team empowerment through “composition” rather than converged control.


Wi-Fi 7: Everything you need to know about the next era of wireless networking

With each iteration of Wi-Fi standards, channel widths have widened to allow for more simultaneous data transfer streams. It's intended to enable multiple devices to communicate, but increasing the channel width doesn't necessarily equate to faster speeds. There are often benefits to sticking with narrower channel widths of around 20-40MHz, but Wi-Fi 7 jumps to 320MHz for its 6GHz band. Wi-Fi 6E already uses a 6GHz band but is limited to 160MHz, so doubling the channel width is a big selling point for the upcoming standard. As with most technical advancements, real-world performance upgrades will rely on whether your devices are efficiently designed to support the maximum theoretical speeds of Wi-Fi 7.
... MU-MIMO (multi-user, multiple input, multiple output) increases to 16 streams for Wi-Fi 7 alongside the wider channel, doubling the bandwidth from the 8 streams of Wi-Fi 6. The more antennas on your router, internal or external, the better equipped it will be to handle the maximum theoretical bandwidth limits. 
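
As rough background (not from the article), the theoretical peak PHY rate scales with the product of channel width, spatial streams, modulation order and coding rate, which is why doubling both the channel width and the stream count matters so much:

```latex
% Rough scaling of the theoretical peak PHY rate:
R_{\max} \;\propto\; N_{\text{streams}} \times B_{\text{channel}} \times
\log_2(M) \times R_{\text{coding}}
```

Doubling the channel width (160MHz to 320MHz) and doubling the streams (8 to 16) roughly quadruples the theoretical ceiling before any gains from higher-order modulation, though real-world throughput still depends heavily on client support and radio conditions.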


ChatGPT and Digital Trust: Navigating the Future of Information Security

As we navigate this monumental shift, the focus on information security and safeguarding against risks becomes paramount, particularly in the realm of AI. This is where the fascinating and complex issue of digital trust comes into play. Amidst recent news stories of data breaches and privacy concerns, the importance of digital trust and robust information security have never been more critical. ... In the age of AI, maintaining trust in our digital world is an ongoing process that requires constant attention and adaptation. It involves asking tough questions, making complex decisions and collaborating as a tech community. As we continue to integrate AI technologies like ChatGPT into our digital landscape, let’s focus on building a strong foundation of trust that promotes innovation while prioritizing the safety and well-being of everyone involved. As professionals in the technology field, it’s our responsibility to understand, adapt and innovate in a responsible and ethical manner. Let’s keep exploring, questioning and learning because that’s what the journey of technology is all about, especially when it comes to reinforcing information security.


Gartner: Generative AI not yet influencing IT spending, but enterprises should plan for it

“The generative AI frenzy shows no signs of abating,” said Frances Karamouzis, distinguished VP analyst at Gartner, in a statement. “Organizations are scrambling to determine how much cash to pour into generative AI solutions, which products are worth the investment, when to get started and how to mitigate the risks that come with this emerging technology.” That same poll found that 68% of executives believe the benefits of generative AI outweigh the risks, compared with just 5% that feel the risks outweigh the benefits. “Initial enthusiasm for a new technology can give way to more rigorous analysis of risks and implementation challenges,” Karamouzis stated. “Organizations will likely encounter a host of trust, risk, security, privacy and ethical questions as they start to develop and deploy generative AI.” Another survey, this one published by MIT Technology Review Insights and sponsored by enterprise data management company Databricks, polled 600 senior data and technology executives.


IDEA: a Framework for Nurturing a Culture of Continuous Experimentation

Empathy and trust go a long way when building relationships. If the team is expected to pick up new skills, they need to have dedicated and uninterrupted time to practice and learn. As a team, you can timebox the uninterrupted time you need. However, expecting your team to pick up new skills while they’re also expected to work full-time on their current projects will end in disappointment and burnout. Another important factor is that people adopt new skills differently. Some people learn better in groups and some alone. I always respect individual preferences. However, having a couple of hours of workshops for the whole team often benefits everyone. During these workshops everyone can discuss their learning, questions, and interesting facts they found out. From my experience as a consultant, I often find myself stepping into the unknown with new clients and projects. This has taught me that openness, honesty and curiosity are fundamental.


Study: We Are Wasting Up to 20 Percent of Our Time on Computer Problems

Surprisingly, studies reveal that a significant amount of our time spent on computers, averaging between 11 and 20 percent, is wasted due to malfunctioning systems or complex interfaces that hinder our ability to accomplish desired tasks. Professor Kasper Hornbæk, one of the researchers involved in the study, deems this situation far from satisfactory. “It’s astonishing how high this percentage is. Almost everyone has experienced the frustration of a critical PowerPoint presentation not being saved or a system crashing at a crucial moment. While it is widely recognized that creating IT systems that align with users’ needs is challenging, the occurrence of such issues should be much lower. This highlights the insufficient involvement of ordinary users during the development of these systems,” Professor Hornbæk asserts. Professor Morten Hertzum, the other researcher contributing to the study, emphasizes that the majority of frustrations stem from the performance of everyday tasks, rather than complex endeavors.


Mitigating the organisational risks of generative AI

Firstly, keeping an eye on how their systems are being used, rolling up topics, attacks and other exploits to understand the moving threat landscape, will be key — along with keeping warning thresholds low for anomalous events. Ensuring all AI-augmented platforms and services have a dedicated ‘kill switch’ with the ability to revoke keys and other methods of access will become ever more vital as we advance to peak GenAI. ... It’s often a great yardstick of how a service, function or platform is performing in the market, so keeping a watch on service and keywords after a big product launch is always a good idea — especially when it comes to picking up any AI responses that break ethics or are reputationally damaging. Providing engineering teams with access to the latest AI-related news on the underlying technologies they’re using is another preventative measure you can put in place. This will help in the battle to quickly spot any upstream problems, allowing engineers to proactively restrict affected services as required.



Quote for the day:

“If we wait until we’re ready, we’ll be waiting for the rest of our lives.” -- Lemony Snicket

Daily Tech Digest - July 19, 2023

This is why personal encryption is vital to the future of business

We already recognize that humans are the weakest link in any security infrastructure. But what isn’t sufficiently recognized is that any action that puts those humans more at risk makes anyone they work for more vulnerable. A well-resourced attacker will simply identify who works at the company they're aiming for and then find ways to compromise some of those individuals using seemingly unrelated tricks. That compromised data will then feed into more sophisticated attacks against the actual target. So, what makes it easy to create those customized attacks in the first place? Information about those people, what they enjoy, who they know, where they go, and how they flow. That’s precisely the kind of data any weakening in end-to-end encryption for individuals makes easier to get. Because if you weaken personal data protection in one place, you might as well weaken it in every place. And once you do that, you’re presenting hackers and attackers with a totally tempting table of attack surface treats to chow down on. This is not clever, nor is it sensible.


Data protection and AI - accountability and governance

Part of risk remediation will include having policies and procedures in place that give operational staff sufficient direction as to their roles and responsibilities. These should be readily available and supported by training. Risk management policies will need to be implemented, or existing policies updated, to address AI-specific considerations: for example, obtaining and handling AI training and test data; procuring and assessing external software; allocating roles and responsibilities for validation and independent sign-off of AI system development, deployment and updates (which may also include a role for an ethics committee); and ensuring that policies relevant to automated decision making address risks of bias, prejudice or lack of interpretability. ... The UK GDPR requires controllers to be transparent with individuals about how their personal data will be collected and processed within AI systems, including telling them how and why such data will be processed, explaining any decisions made with AI, and setting out how long any personal data will be retained and who it will be shared with.


E-Waste: Australia’s Hidden ESG Nightmare

For Australian enterprises, e-waste is as much an IT life-cycle challenge as an environmental one. With an increasingly decentralized workforce, IT teams are struggling to keep up with patch maintenance as well as the provisioning and deployment of new devices in a way that doesn’t disrupt operations. Consequently, these organizations are prone to creating unnecessary e-waste through poor processes, which can have several consequences for a business. ... It remains true that managing e-waste at scale can be a logistical challenge for organizations. The best solution would be for IT teams to work with their suppliers and partners to establish a cyclical logistics chain, where older equipment is automatically fed back to the vendor and added to its e-waste management programs using the same logistics that deliver new technology. With the right partners and suppliers, which can offer reliable data-wiping services, the IT team will be able to manage the challenges of e-waste management in Australia. Largely because of these risk factors, the costs of poorly managed e-waste are likely to accelerate rapidly in the months ahead.


The draft data privacy law surprises with its simplicity

For the most part, the draft Digital Personal Data Protection Bill was pretty much what we had been promised: simple, principles-based and generally appropriate for our current stage of maturity. Most businesses I spoke with confirmed that, if passed as is, they would have no problem complying with the obligations it imposed after a reasonably short transition period. To be clear, there were things we would have liked to see changed, clauses that needed to be tweaked and others I would have liked removed. I had an opportunity to engage in the consultations that followed and found the government not just willing to hear our points of view, but keen to understand what impact the text of the draft would have on implementation of the law. In a truly democratic process, it is impossible for everyone’s suggestions to be incorporated, especially when they come from different perspectives. That is probably the case for several of my suggestions, but where there is a multiplicity of views, only one can be reflected.

The question is how an enterprise can use its data to do more than just cool things. Enterprises are considering how their data can help shareholders. Kobielus wrote TDWI’s Best Practices Report with an eye to determining the chief factors that contribute to data monetization success. He found what he calls “four strategies for data monetization.” “The first one may not, at first glance, sound like a key strategy for monetization of data at all, but it is. It is data democratization -- giving everybody in your organization access to the best data you have to support data-driven analytics,” such as performing queries and producing reports. Enterprises can see the payoff of data democratization in qualitative terms (such as employees working smarter), but there are quantitative payoffs as well, such as making better business decisions that enable the organization to boost sales, hold on to customers, or upsell to existing customers. “When we talk about data monetization, it's a maturity model, where you move from data democratization to operationalizing data.”


Managing Human Risk Requires More Than Awareness Training

The first step in managing human risk is to conduct a risk assessment to identify the risk factors most critical to the organization. Sound familiar? To be successful, a risk analyst must assess the likelihood of a vulnerability being exploited and the impact that would occur as a result of the event. To find these threat sources, the security operations team should be engaged to uncover documentation regarding cyberincidents, threat intelligence and mitigation plans from past audits. The security operations team also tests users on the likelihood of penetration, for example through phishing simulation exercises. Once an assessor has this information, they can build a risk register to prioritize the highest risk factors. Any educator knows that it is not possible to teach someone everything they need to know and expect them to retain all the information. ... For example, employees in an organization should be made aware of the risk associated with phishing attacks or identity theft efforts that engage employees through attack vectors such as emails, texts or phone calls.
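As a rough illustration of the risk-register step described above, the sketch below scores each human-risk factor by likelihood and impact and sorts the register so the highest-scoring risks surface first. The factors, the 1-to-5 scale and the scores are invented for illustration and are not taken from the article.

    # Illustrative risk register: prioritize human-risk factors by likelihood x impact.
    from dataclasses import dataclass

    @dataclass
    class RiskFactor:
        name: str
        likelihood: int  # 1 (rare) to 5 (almost certain)
        impact: int      # 1 (negligible) to 5 (severe)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        RiskFactor("Phishing via email", likelihood=5, impact=4),
        RiskFactor("Credential reuse across services", likelihood=4, impact=4),
        RiskFactor("Vishing (phone-based pretexting)", likelihood=3, impact=3),
    ]

    # Highest combined score first, so training effort targets the biggest risks.
    for factor in sorted(register, key=lambda f: f.score, reverse=True):
        print(f"{factor.name}: score {factor.score}")

A real register would also track owners, existing controls and review dates, but the likelihood-times-impact ranking is the core of the prioritization.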


A quick intro to the MACH architecture strategy

At the very least, most software teams are likely already putting one or more MACH elements to considerable use. In that case, this evaluation will help reveal which of the four components your organization might be overlooking. For instance, if your organization is currently deploying microservices-based applications on individual servers, deploying those applications in containers across a cluster of servers would be one way to align more closely with a MACH strategy. Another plausible scenario is that a software team already uses microservices and cloud-native hosting, but isn't yet managing APIs in a way that positions them at the center of application design plans and build processes. Adopting an API-first development strategy -- that is, one that prioritizes determining how APIs will behave and addressing specific business requirements before any actual coding starts -- would place that team one step closer to proper MACH adoption. However, for teams that are truly starting at square one, such as those still running a localized monolith, it often makes the most sense to start with headless application design.
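As an illustration of the headless, API-first end of that spectrum, the sketch below exposes product data only through a JSON API, so any front end (web, mobile, kiosk) can consume the same contract. The endpoint, fields and data are hypothetical and the example uses only the Python standard library; it is a teaching sketch, not a recommendation of any specific MACH product.

    # Headless, API-first sketch: content is served only as JSON, with no built-in UI.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # API-first: the path and response shape are agreed with consumers before coding starts.
    PRODUCTS = {"42": {"id": "42", "name": "Espresso machine", "price_cents": 19900}}

    class HeadlessAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/products/"):
                product = PRODUCTS.get(self.path.rsplit("/", 1)[-1])
                status, body = (200, product) if product else (404, {"error": "not found"})
            else:
                status, body = (404, {"error": "unknown route"})
            payload = json.dumps(body).encode()
            self.send_response(status)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # Any client, e.g. curl http://localhost:8080/products/42, receives the same JSON contract.
        HTTPServer(("localhost", 8080), HeadlessAPI).serve_forever()

The point of the contract-first ordering is that the response shape above would be agreed with its consumers before the handler is ever written.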


Is PC-as-a-Service part of your hybrid work strategy?

Getting new PCs into the hands of employees and making sure they’re regularly refreshed is complex. The old models of centralized staging and warehousing can create delays and excess shipping costs in today’s hybrid workstyles. Moreover, IT teams struggle to find time to manage day-to-day PC lifecycle tasks. ... By taking this service-oriented approach to PC management, IT teams will spend less time managing and supporting devices, freeing up time to focus on projects that have a greater impact on the business. From a financial perspective, Dell APEX PCaaS flips the script of employee device purchasing from a fixed cost to a predictable, monthly expenditure. Payments that spread out over time—like leasing a car or subscribing to cable services—align with your experience of consuming cloud software while affording you flexibility in how you plan your budget and allocate people resources. With Dell APEX PCaaS you can help your overworked IT staff deploy, support, and manage PCs, reducing time to value and total cost of ownership while ensuring that employees remain productive.


Why and how CISOs should work with lawyers to address regulatory burdens

As the regulatory burden increases, organizations and CISOs are having to take ownership of cyber risk, but it needs to be seen through the lens of business risk, according to Kayne McGladrey, field CISO with Hyperproof. Cyber risk is no longer simply a technology risk. "The problem is, organizationally, companies have separated those two and have their business risk register and their cyber risk register, but that’s not the way the world works anymore," says McGladrey. He believes the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC) and other regulators in the US are trying to promote collaboration among business leaders because cyber risks are functionally business risks. ... However, not all CISOs are naturally well versed in defining the business case of cyber risk. McGladrey believes CISOs who are adept at articulating the business value of cybersecurity will find it easier to achieve buy-in, while those with a more technical background who emphasize compliance over business risk may find it harder to get support and budget.


Stress Test: IT Leaders Strained by Talent Shortage, Tech Spend

George Jones, CISO at Critical Start, says a shortage of skilled professionals has led to delays in certain projects and increased workloads for existing team members. “To combat these delays, we have looked at upskilling current employees, brought in interns with specific skill sets, leveraged contract and freelance workers, and implemented knowledge-sharing to encourage cross-functional collaboration, empowering employees to learn from one another,” he says. He explains that Critical Start employees have clearly defined roles and responsibilities that align with their team and organizational goals, and that cross-functional collaboration is encouraged to leverage diverse perspectives and expertise. “Agile methodologies promote transparency, adaptability, and iterative progress and foster a culture of psychological safety where individuals feel comfortable sharing ideas, taking risks, and learning from failures,” he adds. To foster a culture of communication and collaboration, Jones says, his teams meet regularly to share knowledge and project updates, and to provide feedback on what is working and what isn’t.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney