Daily Tech Digest - April 12, 2025


Quote for the day:

"Good management is the art of making problems so interesting and their solutions so constructive that everyone wants to get to work and deal with them." -- Paul Hawken


Financial Fraud, With a Third-Party Twist, Dominates Cyber Claims

Data on the most significant threats and what technologies and processes can have the greatest preventative impact on those threats are extremely valuable, says Andrew Braunberg, principal analyst at business intelligence firm Omdia. "It's great data for the enterprise, no question about it — that kind of data is going to be more and more useful for folks," he says. "As insurers figure out how to collect more standardized data, and more comprehensive data, at a quicker cadence — that's good news." ... While most companies do not consider their cyber-insurance provider a security adviser, they do make decisions based on the premiums presented to them, says Omdia's Braunberg. And many companies seem ready to rely on insurers more. "Nobody really thought of these guys as security advisors that they should really be turning to, but if that shift happens, then I think the question gets a lot more interesting," he says. "Companies may have these annual sit-downs with their insurers where you really walk through this data and decide what kind of investments to make — and that's a different world than the way most security investment decisions are done today." The fact that cyber insurers are moving into an advisory role may be good news, considering the US government's pullback from aiding enterprises with cybersecurity, says At-Bay's Tyra.


How to Handle a Talented, Yet Quirky, IT Team Member

Balance respect for individuality with the needs of the team and organization. By valuing their quirks as part of their creative process, you'll foster a sense of belonging and loyalty, Honnenahalli says. "Clear boundaries and open communication will prevent potential misunderstandings, ensuring harmony within the team." ... Leaders should aim to channel quirkiness constructively rather than working to eliminate it. For instance, if a quirky habit is distracting or counterproductive, the team leader can guide the individual toward alternatives that achieve similar results without causing friction, Honnenahalli says. Avoid suppressing individuality unless it directly conflicts with professional responsibilities or team cohesion. Help the unconventional team member channel their quirks productively rather than trying to reduce them, Xu suggests. "This means offering support and guidance in ways that allow them to thrive within the structure of the team." Remember that quirks can often be a unique asset in problem-solving and innovation. ... In IT, where innovation thrives on diverse perspectives, quirky team members often deliver creative solutions and unconventional thinking, Honnenahalli says. "Leaders who manage such individuals effectively can cultivate a culture of innovation and inclusivity, boosting morale and productivity."


A Guide to Managing Machine Identities

Limited visibility into highly fragmented machine identities makes them difficult to manage and secure. According to CyberArk's 2024 Identity Security Threat Landscape Report - a global survey of 2,400 security decision-makers across 18 countries - 93% of organizations experienced two or more identity-related breaches in 2023. Machine identities are a frequent target, with previous CyberArk research indicating that machine identities have access to sensitive data in two-thirds of organizations. A ransomware attack on a popular file transfer system last year exposed the sensitive information of approximately 60 million individuals and impacted more than 2,000 public and private sector organizations. ... To address the challenges associated with managing fragmented machine identities, CyberArk Secrets Hub and CyberArk Cloud Visibility can help standardize and automate operational processes. These tools provide better visibility into identities that require access and determine whether the request is legitimate. ... Organizations should identify and secure their machine identities across multiple on-premises and cloud environments, including those from different cloud service providers. The right governance tool can help organizations meet the unique needs of each platform, while also making it easier to maintain a unified approach to machine identity management.
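
A minimal sketch of the pattern such tools automate: workloads fetch secrets at runtime instead of hard-coding them. AWS Secrets Manager via boto3 is used purely as a generic illustration (CyberArk's products expose their own interfaces), and the region and secret name are hypothetical:

```python
import boto3

# Generic illustration of runtime secret retrieval for a machine identity.
# The region and secret name below are assumptions made for this sketch.
client = boto3.client("secretsmanager", region_name="us-east-1")

secret = client.get_secret_value(SecretId="prod/payments/db-password")
db_password = secret["SecretString"]  # rotated centrally, never baked into code

# A governance layer can then audit which identity read which secret,
# closing the visibility gap the article describes.
```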


7 strategic insights business and IT leaders need for AI transformation in 2025

AI innovation continues rapidly, but enterprises must distinguish between practical AI that delivers tangible ROI and aspirational solutions that lack immediate business value. Practical AI enhances agent productivity, reduces handle times, and personalizes customer interactions in ways that directly impact revenue and operational efficiency. Business leaders must challenge vendors to demonstrate clear business cases, ensuring AI investments align with specific organizational objectives rather than speculative, unproven technology. Also, every AI initiative must have a roadmap with clearly defined focus areas and milestones. ... Enterprises now generate vast amounts of interaction data, but the true competitive advantage sits with AI-powered analytics. Real-time sentiment analysis, predictive modeling, and conversational intelligence redefine how organizations measure and optimize performance across customer-facing and internal communications. Companies that harness these insights can proactively address customer needs, optimize workforce performance, and drive data-driven decision-making -- at scale. ... Automation is no longer just a convenience but a necessity for streamlining complex business processes and enhancing customer journeys.


Bryson Bort on Cyber Entrepreneurship and the Needed Focus on Critical Infrastructure

Most people only know industrial control systems as “Stuxnet” and, even then, with a limited idea of what exactly that means. These are the computers that run critical infrastructure, manufacturing plants, and dialysis machines in hospitals. A bad day with normal computers means ransomware where a business can’t run, espionage where a company loses valuable data, or a regular person getting scammed out of their bank account. All pretty bad, but at least everyone is still breathing. With ICS, a bad day can mean loss of life or limb and that’s just at the point of use. The downstream effects of water or electricity being disrupted send us to the Stone Age immediately, and there is a direct correlation to loss of life in those scenarios. ... As an entrepreneur, it’s the same and the Law of N is the variable number of people that you can lead where you personally have a visible impact on their daily requirements. The second you hit N+1, it is another leader below you in the chain who now has that impact. In summary: 1) you can’t do it alone, being an individual contributor (no matter how talented) is never going to be as impactful as a squad/team; 2) the structure you build is going to dictate the success or failure of the execution of your ideas; and 3) you have leadership limits of what you can control.


Rethinking talent strategy: What happens when you merge performance with development

Often, performance and development live on different systems, with no unified view of progress, potential, or skill gaps. Without a continuous data loop, talent teams struggle to design meaningful interventions, and line managers lack the insight to support growth conversations effectively. The result? Employee development efforts become reactive, generic, and in many cases, ineffective. But the problem isn’t just technical. According to Mohit Sharma, CHRO at EKA Mobility, there’s a strategic imbalance in focus. “Performance management often prioritises business metrics—financials, customer outcomes, process efficiency—while people-related goals receive less attention,” he says. “This naturally sidelines employee development.” And when development is treated as an afterthought, Individual Development Plans (IDPs) become little more than checkboxes. “The IDP often runs as a standalone activity, disconnected from performance outcomes,” Sharma adds. “This fragmentation means development doesn’t feed into performance—and vice versa.” Moreover, most organisations struggle with systematic skill-gap identification. In fast-changing industries, capability needs evolve every quarter. 


How cybercriminals are using AI to power up ransomware attacks

Ransomware gangs are increasingly deploying AI across every stage of their operations, from initial research to payload deployment and negotiations. Smaller outfits can punch well above their weight in terms of scale and sophistication, while more established groups are transforming into fully automated extortion machines. As new gangs emerge, evolve and adapt to boost their chances of success, here we explore the AI-driven tactics that are reshaping ransomware as we know it. Cybercriminal groups will typically pursue the path of least resistance to making a profit. As such, most malicious uses of AI have targeted the lower-hanging fruit of automating existing processes. That said, there is also a significant risk of more tech-savvy groups using AI to enhance the effectiveness of the malware itself. Perhaps the most dangerous example is polymorphic ransomware, which uses AI to mutate its code in real time. Each time the malware infects a new system, it rewrites itself, making detection far more difficult as it evades antivirus and endpoint security looking for specific signatures. Self-learning capabilities and independent adaptability are drastically increasing the chances of ransomware reaching critical systems and propagating before it can be detected and shut down.


IBM Quantum CTO Says Codes And Commitment Are Critical For Hitting Quantum Roadmap Goals

The technique — called the Gross code — shrinks the number of physical qubits required to produce stable output, significantly easing the engineering burden, according to R&D World. “The Gross code bought us two really big things,” Oliver Dial, IBM Quantum’s chief technology officer, said in an interview with R&D World. “One is a 10-fold reduction in the number of physical qubits needed per logical qubit compared to typical surface code estimates.” ... IBM’s optimism is grounded not just in long-term error correction, but in near-term tactics like error mitigation, a strategy to extract meaningful results from today’s imperfect machines. These techniques offer a way to recover accurate answers from computers that commit errors, Dial told R&D World. He sees this as a bridge between today’s noisy intermediate-scale quantum (NISQ) machines and tomorrow’s fully fault-tolerant quantum computers. Competitors are also racing to prove real-world use cases. Google has published recent results in quantum error correction, while Quantinuum and JPMorgan Chase are exploring secure applications like random number generation, R&D World points out. IBM’s bet is that better codes, especially its low-density parity check (LDPC) approach refined through the Gross code, will accelerate real deployments.


Defining leadership through mentorship and a strong network

While it’s a challenge to schedule a time each month that works for everyone, she says, there’s a lot of value in them for building strong team camaraderie. It’s also helped everyone better understand diverse backgrounds, what everyone’s contributing, and how the team can lean into those strengths and overcome challenges. ... While she wasn’t sure how it would land, it grabbed the attention of the CIO, who had never seen this approach before, and opened the dialogue for Schulze to be a candidate. She decided to push past any insecurities or fears, and go for a position she didn’t necessarily feel totally qualified for, but ended up landing the job. Schulze knows not everyone feels comfortable stepping out of their comfort zone, but as a leader, she wants to set that example for her employees. She identifies opportunities for growth and advancement, regardless of background or experience, and helps them tap into their potential. She understands it’s difficult for women to break through the boys’ club mentality that can exist in tech, and the challenge to fight stereotypes around women in IT and STEM careers. In her own career, Schulze had to work extra hard to prove her worth and value, even when she had the same answers as her male counterparts.


Cracking the Code on Cybersecurity ROI

Quantifying the total cost of cybersecurity investments — which have long been at the top of most companies' IT spending priorities — is easy enough. It entails adding up the cost of the hardware resources, software tools, and personnel (including both internal employees as well as any outsourced cybersecurity services) that an organization deploys to mitigate security risks. But determining how much value those investments yield is where things get tricky. This is primarily because, again, the goal of cybersecurity investments is to prevent breaches from occurring — and when no breach occurs, there is no quantifiable cost to measure. ... Rather than estimating breach frequency and cost based on historical data specific to your business, you could look at data about current cybersecurity trends for other companies similar to yours, considering factors like their region, the type of industry they operate in, and their size. This data provides insight into how likely your type of business will experience a breach and what that breach will likely cost. ... A third approach is to measure cybersecurity ROI in terms of the value you don't create due to breaches that do occur. This is effectively an inverse form of cybersecurity ROI. ... Using this data, you can predict how much money you'd save through additional cybersecurity spending.
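
As a back-of-envelope illustration of the estimation logic described above (all figures are invented for the example; real inputs would come from breach data for companies of similar region, industry, and size):

```python
# Illustrative ROSI (return on security investment) arithmetic.
# Every number here is hypothetical; substitute estimates drawn from
# industry breach statistics for businesses comparable to yours.

annual_breach_probability = 0.20   # chance of a breach in a given year
expected_breach_cost = 4_500_000   # estimated cost if a breach occurs

# Annualized loss expectancy (ALE) before any new investment.
ale_before = annual_breach_probability * expected_breach_cost

mitigation_ratio = 0.40            # share of expected loss the control removes
control_cost = 250_000             # annual cost of the added control

risk_reduced = ale_before * mitigation_ratio
rosi = (risk_reduced - control_cost) / control_cost

print(f"ALE before control: ${ale_before:,.0f}")      # $900,000
print(f"Expected loss avoided: ${risk_reduced:,.0f}") # $360,000
print(f"ROSI: {rosi:.0%}")  # 44%; positive means the spending pays for itself
```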

Daily Tech Digest - April 11, 2025


Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker


Legacy to Cloud: Accelerate Modernization via Containers

What could be better than a solution that lets you run applications across environments without dependency constraints? That’s where containers come in. They accelerate your modernization journey. The containerization of legacy applications liberates them from the rusty old VMs and servers that limit the scalability and agility of applications. Containerization offers benefits including agility, portability, resource efficiency, scalability and security. ... migrating legacy applications to containers is not a piece of cake. It requires careful planning and execution. Unlike cloud native applications, which are built for containers and Kubernetes, legacy applications were not designed with containerization in mind. The process demands significant time and expertise, and organizations often struggle at the very first step. Legacy monoliths, with their tightly coupled components and complex dependencies, require particularly extensive Dockerfiles. Writing Dockerfiles for legacy monoliths is complex and error-prone, often becoming a significant bottleneck in the modernization journey. ... The challenge intensifies when documentation is outdated or missing, turning what should be a modernization effort into a resource-draining archaeological expedition through layers of technical debt.
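
For a sense of what this looks like in practice, here is a deliberately minimal sketch of a Dockerfile for a hypothetical legacy Java service; the artifact names and paths are assumptions, and real legacy Dockerfiles are usually far longer than this:

```dockerfile
# Minimal sketch for containerizing a hypothetical legacy Java service.
# Real legacy Dockerfiles are usually far longer: OS packages, native
# libraries, config files, and undocumented dependencies all pile up.

# Legacy apps often pin old runtimes; Temurin still publishes Java 8 images.
FROM eclipse-temurin:8-jre

# Hypothetical artifact and paths; monoliths often expect fixed layouts.
COPY build/legacy-app.jar /opt/app/legacy-app.jar
COPY conf/ /opt/app/conf/

WORKDIR /opt/app
EXPOSE 8080

# Assumes a self-executing jar; many monoliths instead need a full
# application server (Tomcat, WebLogic), which multiplies the work.
CMD ["java", "-jar", "/opt/app/legacy-app.jar"]
```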


Four paradoxes of software development

No one knows how long the job will take, but the customer demands a completion date. This, frankly, is probably the biggest challenge that software development organizations face. We simply can’t be certain how long any project will take. Sure, we can estimate, but we are almost always wildly off. Sometimes we drastically overestimate the time required, but usually we drastically underestimate it. For our customers, this is both a mystery and a huge pain. ... Adding developers to a late project makes it later. Known as Brooks’s Law, this rule may be the strangest of the paradoxes to the casual observer. Normally, if you realize that you aren’t going to make the deadline for filling your monthly quota of toothpaste tubes, you can put more toothpaste tube fillers on the job and make the date. If you want to double the number of houses that you build in a given year, you can usually double the inputs—labor and materials—and get twice as many houses, give or take a few. ... The better you get at coding, the less coding you do. It takes many years to gain experience as a software developer. Learning the right way to code, the right way to design, and all of the rules and subtleties of writing clean, maintainable software doesn’t happen overnight. ... Software development platforms and tools keep getting better, but software takes just as long to develop and run.
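
A standard back-of-envelope model (not from the article itself) shows why Brooks's Law bites: pairwise communication paths grow quadratically with team size, so doubling a late team more than quadruples coordination overhead while the new hires are still ramping up.

```latex
% Pairwise communication paths among n developers (requires amsmath):
\[
C(n) = \binom{n}{2} = \frac{n(n-1)}{2},
\qquad C(5) = 10, \qquad C(10) = 45.
\]
```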


Drones are the future of cybercrime

The rapid evolution of consumer drone technology is reshaping its potential uses in many ways, including its application in cyberattacks. Modern consumer drones are quieter, faster, and equipped with longer battery life, enabling them to operate further from their operators. They can autonomously navigate obstacles, track moving objects, and capture high-resolution imagery or video. ... And there are so many other uses for drones in cyberattacks: Network sniffing and spoofing: Drones can be equipped with small, modifiable computers such as a Raspberry Pi to sniff out information about Wi-Fi networks, including MAC addresses and SSIDs. The drone can then mimic a known Wi-Fi network, and if unsuspecting individuals or devices connect to it, hackers can intercept sensitive information such as login credentials. Denial-of-service attacks: Drones can carry devices to perform local de-authentication attacks, disrupting communications between a user and a Wi-Fi access point. They can also carry jamming devices to disrupt Wi-Fi or other wireless communications. Physical surveillance: Drones equipped with high-quality cameras can be used for physical surveillance to observe shift changes, gather information on security protocols, and plan both physical and cyberattacks by identifying potential entry points or vulnerabilities. 


From Silos to Strategy: Why Holistic Data Management Drives GenAI Success

While data distribution is essential to mitigate risks, it requires a unified approach to be effective. Many enterprises are recognizing the value of implementing unified data architectures that simplify storage and data management and centralize the management of diverse data platforms. These architectures, combined with intelligent data platforms, enable seamless access and analysis of data, making it easier to support analytics and ingestion by generative AI. IT managers can further enhance a system’s data analysis and network security, and introduce a hybrid cloud experience to simplify data management. Today, the tech industry is focused on streamlining how enterprises manage and optimize storage, data, and workloads, and a platform-based approach to hybrid cloud management is critical to managing IT across on-premises, colocation, and public cloud environments. Innovations like unified control planes and software-defined storage solutions are being used to enable seamless data and application mobility. These solutions allow enterprises to move data and applications across hybrid and multi-cloud environments to optimize performance, cost, and resiliency. By simplifying cloud data management, enterprises can efficiently manage and protect globally dispersed storage environments without over-emphasizing resilience at the expense of overall system optimization.


Why remote work is a security minefield (and what you can do about it)

The remote work environment makes employees more vulnerable to phishing and social engineering attacks, as they are isolated and may find it harder to verify suspicious activities. Working from home can create a sense of comfort that leads to relaxation, making employees more prone to risky security behavior. The isolation associated with remote work can also result in impulsive decisions, increasing the likelihood of mistakes. Cybercriminals exploit this by tailoring social engineering attacks to mimic IT staff or colleagues, taking advantage of the lack of direct verification. ... To address these challenges, organizations must prioritize a security-first culture. By prioritizing cybersecurity at every level, from executives to remote workers, organizations can reduce their vulnerability to cyber threats. Additionally, companies can foster peer support networks where employees can share security tips and collaborate on solutions. Another problem that can arise with remote work is privacy. Some companies monitor employee activity to protect their data and ensure compliance with regulations. Monitoring helps detect suspicious behavior and mitigate cyber threats, but it can raise privacy concerns, especially when it involves intrusive methods like tracking keystrokes or taking periodic screenshots. To find a good balance, companies should be upfront about what they’re monitoring and why. 


Inside a Cyberattack: How Hackers Steal Data

Once a hacker breaches the perimeter, the standard practice is to beachhead (dig down) and then move laterally to find the organization’s crown jewels: their most valuable data. Within a financial or banking organization, it is likely there is a database on their server that contains sensitive customer information. A database is essentially a complicated spreadsheet, wherein a hacker can simply run a SELECT and copy everything. In this instance, data security is essential; many organizations, however, confuse data security with cybersecurity. Organizations often rely on encryption to protect sensitive data, but encryption alone isn’t enough if the decryption keys are poorly managed. If an attacker gains access to the decryption key, they can instantly decrypt the data, rendering the encryption useless. Many organizations also mistakenly believe that encryption protects against all forms of data exposure, but weak key management, improper implementation, or side-channel attacks can still lead to compromise. To truly safeguard data, businesses must combine strong encryption with secure key management, access controls, and techniques such as tokenization or format-preserving encryption to minimize the impact of a breach. A database protected by privacy enhancing technologies (PETs), such as tokenization, becomes unreadable to hackers if the decryption key is stored offsite.
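
A minimal sketch of the tokenization idea, assuming an in-memory dict stands in for a separately secured token vault:

```python
import secrets

# Tokenization sketch: sensitive values are swapped for random tokens, and
# the token-to-value mapping (the "vault") lives in a separately secured
# store. Here a dict stands in for that store; a stolen copy of the
# tokenized table alone is unreadable.
_vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(16)  # no mathematical link to the value
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]  # only callers with vault access can do this

record = {"name": "A. Customer", "card": tokenize("4111111111111111")}
print(record)                       # the stored record exposes only the token
print(detokenize(record["card"]))   # authorized lookup recovers the value
```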


You’re always a target, so it pays to review your cybersecurity insurance

Right now, either someone has identified your firm and your weak spots and begun a campaign of targeted phishing attacks, scam links, or credential harvesting, or they are blindly trying to use any number of known vulnerabilities on the web to crack into remote access and web properties. ... Reviewing my compliance with cyber insurance policies was a great exercise in self-assessing just how thorough my base security is, but it also revealed an important fact: that insurance requirements only scratch the surface of the types of discussions you should be having internally regarding your risks of attack. No matter if you feel you are merely at risk of being accidental roadkill on the information superhighway or are actually in the crosshairs of a malicious attacker, always review the risks not only with your cyber insurance carrier in mind, but also with what the attackers are planning. ... During the annual renewal of cyber insurance, the insurance carrier would not even consider insuring my business if we did not demonstrate that we had some fundamental protections in place. Based on the questions and bullet points, you could tell they saw the remote access, third-party vendor access, and network administrator accounts as weak points that needed additional protection.


9 steps to take to prepare for a quantum future

To get ahead of the quantum cryptography threat, companies should immediately start assessing their environment. “What we’re advising clients to do – and working on with clients today – is first go and inventory your encryption algorithms and know what you’re using,” says Saylors. That can be tricky, he adds. ... Because of the complexity of the tasks, ISG’s Saylors suggests that enterprises prioritize their efforts. The first step, he says, is to look at perimeter security. The second step is to look at the encryption around the most critical assets. And the third step is to look at the encryption around data backups. All of this needs to happen as soon as possible. In fact, according to Gartner, enterprises should have created a cryptography database by the end of 2024. Companies should have created cryptography policies and planned their transition to post-quantum encryption by the end of 2024, the research firm says. ... So everything will have to be carefully tested and some cryptographic processes may need to be rearchitected. But the bigger problem is that the new algorithms might themselves be deprecated as technology continues to evolve. Instead, Horvath and other experts recommend that enterprises pursue quantum agility. If any cryptography is hard-coded into processes, it needs to be separated out. “Make it so that any cryptography can work in there,” he says. 
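
A sketch of what "quantum agility" can mean in code, assuming a registry pattern and using Fernet from the cryptography package as a stand-in algorithm; the post-quantum entry is hypothetical:

```python
# Crypto-agility sketch: callers never name an algorithm directly, so
# deprecating one scheme for a post-quantum replacement becomes a
# configuration change rather than a code rewrite. Fernet (AES-based, from
# the "cryptography" package) is only a stand-in here.
from cryptography.fernet import Fernet

class FernetCipher:
    def __init__(self) -> None:
        self._f = Fernet(Fernet.generate_key())
    def encrypt(self, data: bytes) -> bytes:
        return self._f.encrypt(data)
    def decrypt(self, data: bytes) -> bytes:
        return self._f.decrypt(data)

REGISTRY = {"fernet-aes128": FernetCipher()}
# When a vetted post-quantum implementation lands, register it here, e.g.
# REGISTRY["ml-kem-hybrid"] = MlKemHybridCipher()  (hypothetical class)

ACTIVE = "fernet-aes128"  # one config knob selects the scheme estate-wide

def encrypt(data: bytes) -> bytes:
    return REGISTRY[ACTIVE].encrypt(data)

token = encrypt(b"customer record")
print(REGISTRY[ACTIVE].decrypt(token))
```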


Why neurodivergent perspectives are essential in AI development

Experts in academia, civil society, industry, media, and government discussed and debated the latest developments in AI safety and ethics, but representation of neurodivergent perspectives in AI development wasn’t examined. This is a huge oversight especially considering 70 million people in the US alone learn and think differently, including many in tech. Technology should be built for and serve all, so how do we make sure future AI models are accessible and unbiased if neurodivergent representation isn’t considered? It all starts at the development stage. ... A neurodivergent team also makes it easier to explore a wider range of use cases and the risks associated with applications. When you engage neurodivergent people at the development stage, you create a team that understands and prioritizes diverse ways of thinking, learning, and working. And that benefits all users. ... New data from EY found that 85% of neurodivergent employees think gen AI creates a more inclusive workplace, so it’s incumbent on more companies to level the playing field by casting a wider net to include a broader range of employees and tools needed to thrive and generate more accurate and robust datasets. Gen AI can also go a long way to help neurodivergent workers with simple tasks like productivity, quality assurance, and time management. 


Your data's probably not ready for AI - here's how to make it trustworthy

"AI and gen AI are raising the bar for quality data," according to a recent analysis published by Ashish Verma, chief data and analytics officer at Deloitte US, and a team of co-authors. "GenAI strategies may struggle without a clear data architecture that cuts across types and modalities, accounting for data diversity and bias and refactoring data for probabilistic systems," the team stated. ... "Creating a data environment with robust data governance, data lineage, and transparent privacy regulations helps ensure the ethical use of AI within the parameters of a brand promise," said Clayton. Building a foundation of trust helps prevent AI from going rogue, which can easily lead to uneven customer experiences." Across the industry, concern is mounting over data readiness for AI. "Data quality is a perennial issue that businesses have faced for decades," said Gordon Robinson, senior director of data management at SAS. There are two essential questions on data environments for businesses to consider before starting an AI program, he added. First, "Do you understand what data you have, the quality of the data, and whether it is trustworthy or not?" Second, "Do you have the right skills and tools available to you to prepare your data for AI?"


Daily Tech Digest - April 10, 2025


Quote for the day:

"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar



Strategies for measuring success and unlocking business value in cloud adoption

Transitioning to a cloud-based operation involves a dual-pronged strategy. While cost optimization requires right-sizing resources, leveraging discounted instances, and implementing auto-scaling based on demand, accurately forecasting demand and navigating complex cloud pricing structures can be difficult. Likewise, while scalability is enabled by containerization, serverless computing, and infrastructure automation, managing complex applications, ensuring security during scaling, and avoiding vendor lock-in present additional challenges. Therefore, organizations must continuously monitor and adapt their strategies while addressing these challenges. ... An effective cloud strategy aligns business goals through a strong governance framework that prioritizes security, compliance, and cost optimization, while being flexible to accommodate growth. Piloting non-critical applications can help refine this strategy before larger migrations. ... Companies must first assess their maturity model to identify areas for improvement. This includes optimizing their cloud mix by exploring different cloud providers or cost structures, providing regular policy updates for compliance, cultivating a continuous improvement culture, proactively addressing challenges, and having active leadership involvement in the cloud vision for stakeholder buy-in.


Three Keys to Mastering High Availability in Your On-Prem Data Center

A cornerstone of high availability is the redundancy of IT infrastructure. By identifying potential critical single points of failure and, where possible, ensuring there is an option for failover to a secondary resource, you can reduce the risk of downtime in the event of an incident. Redundancy should extend across both hardware and software layers. Implementing failover clusters, resilient networking paths, storage redundancy using RAID, and offsite data replication for disaster recovery are proven strategies. Adopting a hybrid or multi-cloud approach can also reduce reliance on any single service provider. If you operate an off-site data center, ensure it is not dependent on the same power source as your main campus. Be sure to have a disaster recovery and business continuity plan that includes local and offsite backup storage. ... Whether your infrastructure is on-premises, cloud-based, or hybrid, the other key component to achieving high availability is the establishment of failover clusters to facilitate – and even automate – the movement of services and workloads to a secondary resource. Whether hardware (SAN-based) or software (SANless), clusters support the seamless failover of services to backup resources and ensure continuity in the event of severely degraded performance or an outage incident.
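
A toy version of the health-check-and-failover loop that clustering software automates, with hypothetical endpoints; real clusters add quorum voting and fencing so two nodes never serve writes at once:

```python
import time
import urllib.request

# Minimal health-check-and-failover loop, the core of what cluster
# software (SAN-based or SANless) automates. Endpoints are hypothetical.
ENDPOINTS = ["http://primary.internal/health", "http://standby.internal/health"]
active = 0  # index of the node currently serving traffic

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not healthy(ENDPOINTS[active]):
        # Fail over: promote the next node. Real clusters also fence the
        # failed node so both never accept writes simultaneously.
        active = (active + 1) % len(ENDPOINTS)
        print(f"failover: now routing to {ENDPOINTS[active]}")
    time.sleep(5)  # poll interval; production systems use quorum, not one probe
```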


Targeted phishing gets a new hook with real-time email validation

The problem facing defenders is the tactic prevents security teams from doing further analysis and investigation, says the Cofense report. Automated security crawlers and sandbox environments also struggle to analyze these attacks because they cannot bypass the validation filter, the report adds. ... “The only real solution,” he said, “is to move away from traditional credentials to phishing-safe authentication methods like Passkeys. The goal should be to protect from leaked credentials, not block user account verification.” Attackers verifying that email addresses are deliverable, or are associated with specific individuals, is nothing fundamentally new, he added. Initially, attackers used the mail server’s “VRFY” command to verify if an address was deliverable. This still works in a few cases. Next, attackers relied on “non-deliverable receipts,” the bounce messages you may receive if an email address does not exist, to figure out if an email address existed. Both techniques work pretty well to determine if an email address is deliverable, but they do not distinguish whether the address is connected to a human, or if its messages are read. The next step, Ullrich said, was sending obvious spam, but including an “unsubscribe” link. If a user clicks on the “unsubscribe” link, it confirms that the email was opened and read. 
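
For defenders who want to check their own exposure, Python's standard smtplib can probe whether a mail server you operate still answers the legacy VRFY command (host and address below are placeholders; run this only against infrastructure you own):

```python
import smtplib

# Defensive check: does *your own* mail server still answer the legacy
# SMTP VRFY command attackers once used to confirm deliverable addresses?
HOST = "mail.example.com"  # placeholder; use a server you administer

with smtplib.SMTP(HOST, 25, timeout=10) as server:
    code, message = server.verify("someuser@example.com")
    # 250/251 confirm the mailbox exists; hardened servers answer 252
    # (refuses to confirm) or 502 (command disabled) instead.
    print(code, message.decode(errors="replace"))
```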


Data Hurdles, Expertise Loss Hampering BCBS 239 Compliance

It was abundantly clear that there was a gulf between ECB expectations and banks’ delivery soon after BCBS 239 was introduced. In late 2018 the central bank found that 59 per cent of in-scope institutions turned in regulatory reports with at least one failing validation rule and almost 7 per cent of data points were missing from them. The ECB began a “supervisory strategy” in 2022 to close the gap, running until 2024. In May of that year it published a guide that clarified what the overseers expected of banks and embarked on targeted reviews of RDARR capabilities. ... The supervisor blamed “deficiencies” on governance shortcomings, fragmented IT infrastructures and a high level of manual aggregation processing, but admitted “remediation of RDARR deficiencies is often costly, carries significant risk and takes time”. Carroll said that the breadth of the data management effort needed to comply with BCBS 239 has slowed adoption of the capabilities necessary for compliance. “They’re spending so much time planning for BCBS and thinking about what they need to do and what they need to have in place, and the tools that they need and the frameworks that they might need to put in place,” he said. ... “Hindered by outdated IT systems unsuitable for modern data management functions, they struggle with data silos and inconsistent, inaccurate risk reporting,” Ergin told Data Management Insight.


Can We Learn to Live with AI Hallucinations?

Sometimes, LLMs hallucinate for no good reason. Vectara CEO Amr Awadallah says LLMs are subject to the limitations of data compression on text as expressed by the Shannon Information Theorem. Since LLMs compress text beyond a certain point (12.5%), they enter what’s called “lossy compression zone” and lose perfect recall. That leads us to the inevitable conclusion that the tendency to fabricate isn’t a bug, but a feature, of these types of probabilistic systems. What do we do then? ... Instead of using a general-purpose LLM, fine-tuning open source LLMs on smaller sets of domain- or industry-specific data can also improve accuracy within that domain or industry. Similarly, a new generation of reasoning models, such as DeepSeek-R1 and OpenAI o1, that are trained on smaller domain-specific data sets, include a feedback mechanism that allows the model to explore different ways to answer a question, the so-called “reasoning” steps. Implementing guardrails is another technique. Some organizations use a second, specially crafted AI model to interpret the results of the primary LLM. When a hallucination is detected, it can tweak the input or the context until the results come back clean. Similarly, keeping a human in the loop to detect when an LLM is headed off the rails can also help avoid some of LLM’s worst fabrications. 
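
A sketch of the second-model guardrail loop described above; call_llm is a placeholder for whatever model client is in use, not a real API:

```python
# Guardrail sketch: a second model pass judges the primary answer against
# the retrieved source text before anything reaches the user. `call_llm`
# is a placeholder, not a real client library.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

def answer_with_guardrail(question: str, source_text: str, max_retries: int = 2) -> str:
    answer = call_llm(f"Answer using only this source:\n{source_text}\n\nQ: {question}")
    for _ in range(max_retries):
        verdict = call_llm(
            "Does the ANSWER contain claims unsupported by the SOURCE? "
            f"Reply SUPPORTED or UNSUPPORTED.\nSOURCE:\n{source_text}\nANSWER:\n{answer}"
        )
        if verdict.strip().startswith("SUPPORTED"):
            return answer
        # Hallucination detected: tweak the context and regenerate,
        # as the article describes.
        answer = call_llm(
            "Rewrite the answer using only facts in the source.\n"
            f"SOURCE:\n{source_text}\nPREVIOUS ANSWER:\n{answer}\nQ: {question}"
        )
    return "Escalated to a human reviewer."  # keep a human in the loop
```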


How Technical Debt Can Quietly Kill Your Company — And the metrics that can save you

Beyond the direct financial drain, technical debt imposes a crippling operational gridlock. Development velocity plummets — Protiviti suggests significant slowdowns, potentially up to 30%, as teams battle complexity. For Product and Delivery, this means longer lead times, missed deadlines, reduced predictability, and a sluggish response to market changes. Each new feature built on a weak foundation takes longer than the last. Maintenance costs simultaneously escalate. Developers spend disproportionate time debugging obscure issues, patching old components, and managing complex workarounds. These activities can consume up to 40% of the total value of a technology estate over its lifetime — an escalating “maintenance tax” diverting focus from value creation. Crucially, technical debt is a major barrier to innovation. Nearly 70% of organizations acknowledge this, according to Protiviti’s polls. When teams are constantly firefighting, constrained by legacy architecture, and navigating brittle code, their capacity for creative problem-solving and experimentation evaporates. The operational drag prevents exploration, limiting the company’s potential for growth and differentiation. Nokia’s decline serves as a stark cautionary tale of operational gridlock leading to strategic failure. Their dominance in mobile phones evaporated with the rise of smartphones.


How tech giants like Netflix built resilient systems with chaos engineering

Chaos Engineering is a discipline within software engineering that focuses on testing the limits and vulnerabilities of a system by intentionally injecting chaos—such as failures or unexpected events—into it. The goal is to uncover weaknesses before they impact real users, ensuring that systems remain robust, self-healing, and reliable under stress. The idea is based on the understanding that systems will inevitably experience failures, whether due to hardware malfunctions, software bugs, network outages, or human error. ... Netflix is widely regarded as one of the pioneers in applying Chaos Engineering at scale. Given its global reach and the importance of providing uninterrupted service to millions of users, Netflix knew that simply assuming everything would work smoothly all the time was not an option. Its microservices architecture, a collection of loosely coupled services, meant that even the smallest failure could cascade and result in significant downtime for its customers. The company wanted to ensure that it could continue to stream high-quality video content, provide personalized recommendations, and maintain a stable infrastructure—no matter what failure scenarios might arise. To do so, Netflix turned to Chaos Engineering as a cornerstone of its resilience strategy.
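
A minimal sketch of the idea, in the spirit of (but far simpler than) Netflix's tooling: wrap a dependency call and, outside production, randomly inject latency or failure so retry and fallback paths actually get exercised:

```python
import random
import time

# Chaos-injection sketch: a decorator that, in non-production environments
# only, randomly injects latency or failure into a service call so teams
# can verify that timeouts, retries, and fallbacks behave as intended.
CHAOS_ENABLED = True        # gate on an env flag; never enable blindly in prod
FAILURE_RATE = 0.10         # 10% of calls fail outright
MAX_EXTRA_LATENCY_S = 2.0   # up to 2 seconds of added delay

def with_chaos(call):
    def wrapper(*args, **kwargs):
        if CHAOS_ENABLED:
            time.sleep(random.uniform(0, MAX_EXTRA_LATENCY_S))
            if random.random() < FAILURE_RATE:
                raise ConnectionError("chaos: injected dependency failure")
        return call(*args, **kwargs)
    return wrapper

@with_chaos
def fetch_recommendations(user_id: str) -> list[str]:
    return ["title-1", "title-2"]  # stand-in for a real downstream call

try:
    print(fetch_recommendations("u42"))
except ConnectionError as e:
    print("fallback path exercised:", e)  # the resilience logic under test
```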


The AI model race has suddenly gotten a lot closer, say Stanford scholars

Bommasani and team don't make any predictions about what happens next in the crowded field, but they do see a very pressing concern for the benchmark tests used to evaluate large language models. Those tests are becoming saturated -- even some of the most demanding, such as the HumanEval benchmark created in 2021 by OpenAI to test models' coding skills. That affirms a feeling seen throughout the industry these days: It's becoming harder to accurately and rigorously compare new AI models. ... In response, note the authors, the field has developed new ways to construct benchmark tests, such as Humanity's Last Exam, which has human-curated questions formulated by subject-matter experts; and Arena-Hard-Auto, a test created by the non-profit Large Model Systems Corp., using crowd-sourced prompts that are automatically curated for difficulty. ... Bommasani and team conclude that standardizing across benchmarks is essential going forward. "These findings underscore the need for standardized benchmarking to ensure reliable AI evaluation and to prevent misleading conclusions about model performance," they write. "Benchmarks have the potential to shape policy decisions and influence procurement decisions within organizations, highlighting the importance of consistency and rigor in evaluation."


From likes to leaks: How social media presence impacts corporate security

Cybercriminals can use social media to build a relationship with employees and manipulate them into performing actions that jeopardize corporate security. They can impersonate colleagues, business partners, or even executives, using information obtained from social media to sound convincing. ... Many employees use the same passwords for personal social media accounts as for their work accounts, putting corporate data at risk. While convenient, this practice means that if a personal account is compromised, attackers could gain access to work-related systems as well. ... CISOs must now account for employee behavior beyond the firewall. The attack surface no longer ends at corporate endpoints; it stretches into LinkedIn profiles, Instagram vacation posts, and casual tweets. Companies should establish policies regarding what employees are permitted to post on social media, especially about their work and workplace. ... The problem with social media posts is there is a thin line between privacy and company security. CISOs have to walk a thin line, keeping the company secure without policing what employees do on their own time. This is why privacy awareness training should be integrated with cybersecurity policies.


Tariffs will hit data centers and cloud providers, but hurt customers

The tariffs applied vary from country to country - with a baseline of 10 percent placed on all goods imported into the US - and much higher rates applied to those countries described by Trump as “the worst offenders,” up to 99 percent in the case of the French archipelago Saint Pierre and Miquelon. However, most pertinent to the cloud computing industry are the tariffs that will hit countries that provide essential computing hardware and materials necessary for data center construction. ... While cloud service providers (CSPs) will certainly be hit by the inevitable rising costs, it is hard to really think of the hyperscalers as the "victims" in this story. Microsoft, Amazon, and Alphabet all lie in the top five companies by market cap, and none have taken particularly drastic hits to their stock value since the news of the tariffs was announced. ... "The high tariffs on servers and other IT equipment imported from China and Taiwan are highly likely to increase CSPs costs. If CSPs pass on cost increases, customers may feel trapped (because of lock-in) and disillusioned with cloud and their provider (because they've committed to building on a cloud provider assuming costs would be constant or even decline over time). On the other hand, if CSPs don't increase prices with rising costs, their margins will decline. It's a no-win situation," Rogers explained.


Daily Tech Digest - April 09, 2025


Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson



How AI and ML Will Change Financial Planning

AI adoption in finance does not come easily. Because finance systems contain vast amounts of sensitive data, they are more susceptible to data breaches. Integrating AI systems with other components, such as cloud services and APIs, can increase the number of entry points that hackers might exploit. Hence, most finance executives cite data security as a top challenge. Limited AI skills are another hurdle: most finance organizations don’t have the skill set to leverage AI in planning and budgeting activities. In the early stages, high costs, staff resistance, lack of transparency, and uncertain ROI dominate. Other hurdles stay constant, such as data security and finding consistent data. As companies expand their use of AI, the potential for bias and misinformation rises, particularly as finance teams tap GenAI. Integrating AI solutions and tools into existing systems also presents more challenges. As AI and ML continue to evolve, their role in financial planning will only grow. The ability to continuously adapt to new data, automate routine processes, and generate predictive insights positions AI as a critical tool for financial leaders. By embracing these technologies, businesses can transition from reactive financial management to proactive, data-driven decision-making that not only mitigates risks but also identifies new opportunities for growth.


The Augmented Architect: Real-Time Enterprise Architecture In The Age Of AI

No human can know everything about a modern digital enterprise. AI doesn’t pretend to either — but it remembers everything and brings the right detail to the fore at the right time. Think of it as a cognitive prosthetic for the architect: surfacing precedents, warnings, and rationale at the point of decision. ... Visibility isn’t just about having access to data — it’s about trust in its freshness. Real-time integration with operational sources (observability platforms, configuration systems, source control, deployment records) ensures that the architecture graph is never out of date. The haystack becomes a needle-sorter. ... Architecture artifacts multiply: PowerPoints, spreadsheets, PDFs, whiteboards. But in an agentic system, everything is rendered on demand from the same graph (and its associated unstructured content, linked via vector embeddings). Want a heatmap of system risks? A regulatory trace? A roadmap to sunset legacy? One prompt, one view — consistent, explainable, and composable. And those unstructured artifacts? An agent is happy to harvest new insights from them back into the knowledge store. ... Review boards become decision accelerators instead of speed bumps. Agents pre-check submissions. Exceptions, not compliance, become the focus. Draft decisions are generated and validated before the meeting even starts. 
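
A toy sketch of the retrieval step behind "one prompt, one view", assuming artifacts are stored alongside vector embeddings; the embed function is a placeholder (random vectors) so the code runs without a real model:

```python
import numpy as np

# Sketch of the "graph plus vector-linked documents" idea: each artifact
# is stored with an embedding, and a prompt is answered by cosine-
# similarity retrieval. `embed` is a placeholder for a real embedding
# model; random vectors here only make the sketch runnable.
rng = np.random.default_rng(0)

def embed(text: str) -> np.ndarray:  # placeholder embedding model
    return rng.standard_normal(8)

artifacts = {
    "ADR-014: retire legacy payments gateway": embed("retire payments gateway"),
    "Risk note: single-region deployment": embed("single region risk"),
}

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    def cos(v: np.ndarray) -> float:
        return float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
    return sorted(artifacts, key=lambda name: cos(artifacts[name]), reverse=True)[:k]

print(retrieve("what is our regional resiliency exposure?"))
```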


Choosing the Most Secure Cloud Service for Your Workloads

Managed cloud servers offer the security benefit of being relatively simple to configure and operate. Simplicity breeds security because the fewer variables you have to work with, the lower the risk of making a mistake that will lead to a breach. On the other hand, managed cloud servers are subject to a relatively large attack surface. Threat actors could target multiple components, including the operating systems installed on server instances, individual applications, and network-facing services. ... If you deploy containers using a managed service like AWS Fargate or GKE, you get many of the same security advantages as you enjoy when using serverless functions: The only vulnerabilities and misconfigurations you have to worry about are ones that impact your containers. The cloud provider bears responsibility for securing the host infrastructure. This isn't true, however, if you deploy containers on infrastructure that you manage yourself — by, for example, creating a Kubernetes cluster using nodes hosted on EC2. In that case, you end up with a broad and complex environment, making it quite challenging to secure. ... Note, too, that containers tend to be complex. A single container image could include code drawn from many sources. 


The Invisible Data Battle: How AI Became a Cybersec Professional’s Biggest Friend and Foe

With all of these booby traps and stonewalling techniques in mind, cybersec professionals have been working on smart scrapers for years, and they’re finally here. A “smart” or “adaptive” scraper uses natural language processing (NLP) and machine learning to handle dynamic content and intricate website architectures (e.g., nested categories and varied page layouts), bypass IP blocking and rate limiting via rotating proxies, deal with CAPTCHAs, login forms and cookies — and even provide real-time data updates. For instance, adaptive scrapers can identify the structure of a web page by analyzing its document object model (DOM) or by following specific patterns, and this allows for dynamic adaptation. AI models like convolutional neural networks (CNNs) can also detect and interact with visual elements on websites, such as buttons. In fact, smart scrapers can even mimic human browsing patterns with random pauses, mouse movements and realistic navigation sequences that bypass behavioral analysis tools. And that’s not all. AI-powered web scrapers can modify browser configurations to mask telltale signs of automation (such as headless browsers that run without a traditional graphical interface) that anti-bot systems look for. 
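
The benign core of that structural adaptation can be sketched in a few lines: try a ranked list of DOM patterns and use whichever matches, so small layout changes do not break extraction. The selectors and URL are hypothetical, and any real scraping should respect robots.txt and site terms:

```python
import requests
from bs4 import BeautifulSoup

# Adaptive-extraction sketch: instead of one hard-coded selector, walk a
# ranked list of candidate DOM patterns and use the first that matches.
CANDIDATE_PATTERNS = [
    "article h1",          # common editorial layout
    "div.product-title",   # common storefront layout
    "h1",                  # last-resort fallback
]

def extract_title(html: str) -> str | None:
    soup = BeautifulSoup(html, "html.parser")
    for pattern in CANDIDATE_PATTERNS:
        node = soup.select_one(pattern)
        if node and node.get_text(strip=True):
            return node.get_text(strip=True)
    return None  # signal that a new pattern needs to be learned

html = requests.get("https://example.com/some-page", timeout=10).text
print(extract_title(html))
```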


The Agile Advantage: doubling down on the biggest business challenges

Agile practices have been gaining popularity, with 51% of respondents indicating their organisations actively use Agile to organize and deliver work. However, the data reveals inconsistencies in how the benefits of Agile are perceived across teams and organisations. ... Regardless of whether teams embrace Agile practices completely, there are opportunities for leaders to bring forward Agile principles to address the unique challenges of modern work. While leaders may feel confident in their teams’ direction, the lack of alignment experienced by entry-level employees can have serious repercussions. Feedback from these employees can serve as a valuable indicator of how effectively an organisation integrates Agile practice–and the data clearly shows there is considerable room for improvement. For organisations of any size, addressing these gaps is imperative. Leaders must adopt consistent tools and frameworks that enhance training, improve communication and foster greater alignment across teams. Proactively tackling these issues early can alleviate future issues like misalignment and burnout, while building a more cohesive and resilient organisation. 


The Strategic Evolution of IT: From Cost Center to Business Catalyst

The most successful organizations recognize that technology-driven transformation requires more than just implementing new solutions — it demands an organization-wide cultural shift. This means evolving IT teams from traditional "order-takers" to influential decision-makers who help shape and execute business strategy. The key lies in creating an environment where innovation thrives and tech professionals feel empowered to contribute their unique perspectives to business discussions. Organizations must invest in both the technical and business acumen of their IT talent. A dual focus on these areas enables teams to better understand the broader business context of their work and contribute more meaningfully to strategic discussions. When IT professionals can speak the languages of both technology and business, they become invaluable partners in driving broader innovation. Success in this area requires a commitment to continuous learning, mentorship programs and creating opportunities for cross-functional collaboration that expose IT teams to diverse business challenges and perspectives. ... With technology continuing to reshape industries and markets, the question is no longer whether tech professionals should have a seat at the strategic table, but how to maximize its potential and impact on business success.


Is HR running your employee security training? Here’s why that’s not always the best idea

“HR departments may not be fully aware of current cyber threats or the organization’s specific risks,” she says. This can result in overly broad or generic training, which reduces its effectiveness. These programs can also fail to emphasize the practical, real-world application of security practices or offer enough guidance on addressing threats if they lack collaboration with security and IT teams.” HR may not effectively tailor the training to the organization’s industry-specific threats, Murphy notes. Without the security department’s involvement, training content often lacks focus and fails to address the company’s unique threats, leaving employees unsure of what to watch for. ... However, while HR shouldn’t run employee security training, Willett does view the HR team as a key partner. He suggests a collaborative approach where HR and security teams work together, leveraging their respective strengths. He explains that HR can help translate complex technical information into understandable language, while the security team provides the core content and technical expertise. ... HR has skin in the game for employee onboarding, compliance, and adherence to company policies and practices, according to Hughes. 


Why CISOs are doubling down on cyber crisis simulations

“It was once enough to theorise risk identification through using risk matrixes and lodging them in a spreadsheet describing threats and their likelihood of materialising,” says Aaron Bugal, Field CISO, APJ at Sophos. “However, looking at the impact caused by ransomware and subsequent extortion demands sending executive teams and board members into a spin, highlights the lack of understanding of how pervasive cyber criminals are and the opportunities they take.” To move beyond theoretical planning, Bugal advocates for breach simulations as a practical step forward. “A simulation of a breach will allow you to draw out the concise and well-measured response actions that are demanded by you and your organisation,” he explains. Bringing together a cross-section of executives helps uncover gaps in readiness. “Physically sitting with a cross section of executives, board members, human resources, IT, security, legal and public relations will draw out the procedures, responsibilities and resources needed to respond with efficacy.” By running these exercises in advance, organizations can avoid the chaos of real-time crisis management. “Simulations provide a structured approach to build and refine a breach response while playing it out and discovering where improvements are needed,” Bugal adds, “rather than learning and panicking whilst under the pressure of an active attack.”


Google Cloud Security VP on solving CISO pain points

On the strategic side, Bailey said CISOs are asking for a middle ground between highly integrated platforms and the flexibility of best-of-breed tools. "They want best of breed with the limited toil of what a platform gives," he said. "They're tired of integrations constantly breaking." Bailey also discussed how the role of development-level security – often called DevSecOps – is increasingly being absorbed into security operations. "The CISO is going to have responsibility for all these problems," he said. "Visibility into what's being deployed, compliance reporting, and detection on application code – that's all coming into SecOps." Another emerging front is model protection. Google's Model Armour and AI Protection aim to defend not just infrastructure but also the AI models themselves. "If a bad prompt starts coming through, we can help block that," Bailey said. "We're putting security controls around development environments, models, data and prompts." The Mandiant brand, once synonymous with incident response, has found new life as both a consulting arm and a foundation for content in Google Threat Intelligence. "Mandiant is our consulting practice," Bailey said. "It's also where our elite threat hunters live – a lot of them are ex-Mandiant, and they're integrated with our consulting team to operationalise what they see on the front lines."


Shadow Table Strategy for Seamless Service Extractions and Data Migrations

The shadow table strategy maintains a parallel copy of data in a new location (the "shadow" table or database) that mirrors the original system’s current state. The core idea is to feed data changes to the shadow in real time, so that by the end of the migration, the shadow data store is a complete, up-to-date clone of the original. At that point, you can seamlessly switch to the shadow copy as the primary source. ... Transitioning from a monolithic architecture to a microservices-based system requires more than just rewriting code; you often must carefully migrate data associated with specific services. Extracting a service from a monolith risks inaccuracy if you do not transfer its dependent data accurately and consistently. Here, shadow tables play a crucial role in decoupling and migrating a subset of data without disrupting the existing system. In a typical service extraction, the legacy system continues to handle all live operations while developers build a new microservice to handle a specific functionality. During extraction, engineers mirror the data relevant to the new service into a dedicated shadow database. Whether implemented through triggers or event-based replication, the dual-write mechanism ensures that the system simultaneously records every change made in the legacy system in the shadow database.
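
A compact sketch of the dual-write mechanism, using sqlite3 as a stand-in for both stores; when legacy and shadow live in different databases, triggers, change data capture, or an outbox replace the single local transaction shown here:

```python
import sqlite3

# Dual-write sketch: every change is applied to the legacy table and the
# shadow table in one transaction, so the shadow stays an up-to-date clone
# until cutover. sqlite3 stands in for both stores; across separate
# databases you would use triggers, CDC, or an outbox instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE orders_shadow (id INTEGER PRIMARY KEY, total REAL)")

def upsert_order(order_id: int, total: float) -> None:
    with conn:  # one transaction: both writes commit, or neither does
        for table in ("orders", "orders_shadow"):
            conn.execute(
                f"INSERT INTO {table}(id, total) VALUES (?, ?) "
                "ON CONFLICT(id) DO UPDATE SET total = excluded.total",
                (order_id, total),
            )

upsert_order(1, 99.50)
upsert_order(1, 120.00)

# Verification before cutover: the copies must match row for row.
legacy = conn.execute("SELECT * FROM orders ORDER BY id").fetchall()
shadow = conn.execute("SELECT * FROM orders_shadow ORDER BY id").fetchall()
assert legacy == shadow, "drift detected; reconcile before switching over"
print(shadow)
```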

Daily Tech Digest - April 08, 2025


Quote for the day:

"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi



AI demands more software developers, not less

Entry-level software development will change in the face of AI, but it won’t go away. As LLMs increasingly handle routine coding tasks, the traditional responsibilities of entry-level developers—such as writing boilerplate code—are diminishing. Instead, their roles will evolve into those of AI supervisors: they’ll test outputs, manage data labeling, and integrate code into broader systems. This necessitates a deeper understanding of software architecture, business logic, and user needs. Doing this effectively requires a certain level of experience and, barring that, mentorship. The dynamic between junior and senior engineers is shifting. Seniors need to mentor junior developers in AI tool usage and code evaluation. Collaborative practices such as AI-assisted pair programming will also offer learning opportunities. Teams are increasingly co-creating with AI; this requires clear communication and shared responsibilities across experience levels. Such mentorship is essential to prevent more junior engineers from depending too heavily on AI, which results in shallow learning and a downward spiral of productivity loss. Across all skill levels, companies are scrambling to upskill developers in AI and machine learning. A late-2023 survey in the United States and United Kingdom showed 56% of organizations listed prowess in AI/ML as their top hiring priority for the coming year. 


Ask a CIO Recruiter: How AI is Shaping the Modern CIO Role

Everything right now revolves around AI, but you still as CIO have to have that grounding in all of the traditional disciplines of IT. Whether that is systems, whether that’s infrastructure, whether that’s cybersecurity, you have to have that well-rounded background. Even as these AI technologies become more prolific, you must consider your past infrastructure spend, your cloud spend, that went into these technologies. How do you manage that? If you don’t have grounding in managing those costs, and being able to balance those costs with the innovation you are trying to create, that’s a recipe for failure on the cyber side. ... When we’re looking for skill sets, we’re looking for people who have actually taken those AI technologies and applied them within their organizations to create real business value -- whether that is cost savings or top-line revenue creation, whatever those are. It’s hard to find those candidates, because there are a lot of those people who can talk the talk around AI, but when you really drill down there is not much in terms of results to show. It’s new, especially in applying the technology to certain settings. Take manufacturing: there’s not that many CIOs out there who have great examples of applying AI to create value within organizations. It’s certainly accelerating, and you’re going to see it accelerating more as we go into the future. It’s just so new that those examples are few and far between.


Architectural Experimentation in Practice: Frequently Asked Questions

When the cost of reversing a decision is low or trivial, experimentation does not reduce cost very much and may actually increase cost. Prior experience with certain kinds of decisions usually guides the choice; if team members have worked on similar systems or technical challenges, they will have an understanding of how easily a decision can be reversed. ... Experiments are more than just playing around with technology. There is a place for playing with new ideas and technologies in an unstructured, exploratory way, and people often say that they are "experimenting" when they are doing this. When we talk about experimentation, we mean a process that involves forming a hypothesis and then building something that tests this hypothesis, either accepting or rejecting it. We prefer to call the other approach "unstructured exploratory learning", a category that includes hackathons, "10% Time", and other professional development opportunities. ... Experiments should have a clear duration and purpose. When you find an experiment that’s not yielding results in the desired timeframe, it’s time to stop it and design something else to test your hypothesis that will yield more conclusive results. The "failed" experiment can still yield useful information, as it may indicate that the hypothesis is difficult to prove or may influence subsequent, more clearly defined experiments.


How Open-Source Orchestration Frameworks Enhance LLM Applications

Orchestration frameworks are crucial for developing sophisticated AI applications that can perform tasks beyond simply answering a single question. While a single LLM is proficient in understanding and generating text, many real-world AI applications require performing a series of steps involving different components. Orchestration frameworks provide the structure necessary to design and manage these complex workflows, ensuring that all the various components of the AI system work together efficiently. ... One way orchestration frameworks enhance the power of LLMs is through a technique known as “prompt chaining.” Think of it as telling a story one step at a time. Instead of giving the LLM a single, lengthy instruction, you provide it with a series of smaller, interconnected instructions known as prompts. The response to one prompt then becomes the starting point for the next, guiding the LLM through a more complex thought process. Open-source orchestration frameworks make it much simpler to create and manage these chains of prompts. They often provide tools that allow developers to easily link prompts together, sometimes through visual interfaces or programming tools. Prompt chaining can be helpful in many situations.
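Here is a minimal sketch of the prompt-chaining idea, assuming a generic call_llm() helper as a stand-in for whatever model API or orchestration framework is in use; the function names and prompts are illustrative and not from any particular framework:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; replace with your provider's SDK."""
    return f"[model response to: {prompt[:48]}...]"

def triage_report(document: str) -> str:
    # Step 1: the first prompt condenses the raw document.
    summary = call_llm(f"Summarize the following document:\n{document}")
    # Step 2: the response to the first prompt becomes the input to the next.
    actions = call_llm(f"List the action items mentioned in this summary:\n{summary}")
    # Step 3: a final prompt reshapes the chained result for the end user.
    return call_llm(f"Rewrite these action items as a short status email:\n{actions}")

print(triage_report("Q3 migration slipped two weeks; vendor contract renewal pending."))
```

Each step stays small and inspectable, which is exactly what orchestration frameworks formalize with their chain-building tools.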


Reframing DevSecOps: Software Security to Software Safety

A templatized, repeatable, process-led approach, driven by collaboration between platform and security teams, leads to a fundamental shift in the way teams think about their objectives. They move from the concept of security, which promises a state free from danger or threat, to safety, which focuses on creating systems that are protected from and unlikely to create danger. This shift emphasizes proactive risk mitigation through thoughtful, reusable design patterns and implementation rather than reactive threat mitigation. ... The outcomes of security products versus product security are vastly different, with the latter producing far greater value. Instead of continuing to shift responsibilities, development teams should embrace the platform security engineering paradigm. By building security directly into shared processes and operations, development teams can scale up to meet their needs today and in the future. Only after these strong foundations have been established should teams layer in routinely run security tools for assurance and problem identification. This approach, combined with aligned incentives and genuine collaboration between teams, creates a more sustainable path to secure software development that works at scale.


10 things you should include in your AI policy

A carefully thought-out AI use policy can help a company set criteria for risk and safety, protect customers, employees, and the general public, and help the company zero in on the most promising AI use cases. “Not embracing AI in a responsible manner is actually reducing your advantage of being competitive in the marketplace,” says Bhrugu Pange, a managing director who leads the technology services group at AArete, a management consulting firm. ... An AI policy needs to start with the organization’s core values around ethics, innovation, and risk. “Don’t just write a policy to write a policy to meet a compliance checkmark,” says Avani Desai, CEO at Schellman, a cybersecurity firm that works with companies on assessing their AI policies and infrastructure. “Build a governance framework that’s resilient, ethical, trustworthy, and safe for everyone — not just so you have something that nobody looks at.” Starting with core values will help with the creation of the rest of the AI policy. “You want to establish clear guidelines,” Desai says. “You want everyone from top down to agree that AI has to be used responsibly and has to align with business ethics.” ... Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data risk and privacy principal at PwC. “You don’t want to overly restrict the low-risk stuff,” he says. 


FedRAMP's Automation Goal Brings Major Promises - and Risks

FedRAMP practitioners, federal cloud security specialists and cybersecurity professionals who spoke to Information Security Media Group welcomed the push to automate security assessments and streamline approvals. They warned that without clear details on execution, the changes risk creating new uncertainties and disrupting companies midway through the existing process. Program officials said they will establish a series of community working groups to serve as a platform for industry and the public to engage directly with FedRAMP experts and collaborate on solutions that meet the program's standards and policies. "This is both exciting and scary," said John Allison, senior director of federal advisory services for federal cybersecurity solutions provider Optiv + ClearShark. "As someone who works with clients on their FedRAMP strategy, this is going to open new options for companies - but I can see a lot of uncertainty weighing heavily on corporate leadership until more details are available." Automation may help reduce costs and timelines, he said, but companies mid-process could face disruption, and agencies will shoulder more responsibility until new tools are in place. Allison said GSA could further streamline FedRAMP by allowing cloud providers to submit materials directly and pursue authorization without an agency sponsor.


Is hyperscaler lock-in threatening your future growth?

Infrastructure flexibility has increasingly become a competitive differentiator. Enterprises that maintain the ability to deploy workloads across multiple environments—whether hyperscaler, private cloud, or specialized provider—gain strategic advantages that extend beyond operational efficiency. This cloud portability empowers organizations to select the optimal infrastructure for each application and workload based on their specific requirements rather than provider limitations. When a new service emerges that delivers substantial business value, companies with diversified infrastructure can adopt it without dismantling their existing technology stack. Central to maintaining this flexibility is the strategic adoption of open source technologies. Enterprise-grade open source solutions provide the consistency and portability that proprietary alternatives cannot match. By standardizing on technologies like Kubernetes for container orchestration, PostgreSQL for database services, or Apache Kafka for event streaming, organizations create a foundation that works consistently across any infrastructure environment. The most resilient enterprises approach their technology stack like a portfolio manager approaches investments—diversifying strategically to maximize returns while minimizing exposure to any single point of failure.
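To make the portability argument concrete, here is a minimal sketch (not from the article) of provider-agnostic application code: because PostgreSQL and Kafka expose the same interfaces everywhere, only the externalized connection configuration changes between a hyperscaler, a private cloud, or a specialized provider. The environment variable names, topic name, and the psycopg2/kafka-python client libraries are illustrative choices, and the snippet assumes reachable PostgreSQL and Kafka services:

```python
import os
import psycopg2                   # standard PostgreSQL driver
from kafka import KafkaProducer   # kafka-python client

# Identical code runs against any conformant PostgreSQL or Kafka service;
# per-environment details live in configuration, not in the application.
db = psycopg2.connect(os.environ["DATABASE_URL"])  # any managed or self-hosted Postgres
producer = KafkaProducer(
    bootstrap_servers=os.environ["KAFKA_BOOTSTRAP"].split(","),
)

with db.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())

producer.send("events", b"portable payload")  # topic name is illustrative
producer.flush()
```

Swapping providers then means changing DATABASE_URL and KAFKA_BOOTSTRAP, not rewriting the stack.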


7 risk management rules every CIO should follow

The most critical risk management rule for any CIO is maintaining a comprehensive, continuously updated inventory of the organization’s entire application portfolio, proactively identifying and mitigating security risks before they can materialize, advises Howard Grimes, CEO of the Cybersecurity Manufacturing Innovation Institute, a network of US research institutes focusing on developing manufacturing technologies through public-private partnerships. That may sound straightforward, but many CIOs fall short of this fundamental discipline, Grimes observes. ... Cybersecurity is now a multi-front war, Selby says. “We no longer have the luxury of anticipating the attacks coming at us head-on.” Leaders must acknowledge the interdependence of a robust risk management plan: Each tier of the plan plays a vital role. “It’s not merely a cyber liability policy that does the heavy lifting or even top-notch employee training that makes up your armor — it’s everything.” The No. 1 way to minimize risk is to start from the top down, Selby advises. “There’s no need to decrease cyber liability coverage or slack on a response plan,” he says. Cybersecurity must be an all-hands-on-deck endeavor. “Every team member plays a vital role in protecting the company’s digital assets.” 


Shift-Right Testing: Smart Automation Through AI and Observability

Shift-right testing goes beyond the conventional approach of performing pre-release testing, enabling development teams to validate software under real-world, in-production conditions. This approach includes canary releases, where new features are released to a subset of users before the full launch. It also involves A/B testing, where two versions of the application are compared in real time. Another important practice is chaos engineering, in which failures are deliberately introduced to check the resilience of the system. ... Chaos engineering is the practice of injecting controlled failures into the system to assess its robustness, with the help of tools like Chaos Monkey and Gremlin. This helps validate the actual behavior of a system in a production-like environment. Testing feedback loops are also automated to ensure that shift-right is applied consistently, using AI-powered test analytics tools like Testim and Applitools to learn from test case selection. This makes it possible to use production data to inform the automatic generation of test suites, increasing coverage and precision. Real-time alerting and self-healing mechanisms also enhance shift-right testing. Observability tools can be configured to send alerts whenever a test fails, and auto-remediation scripts can repair failing test environments without involving IT staff.
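As a minimal illustration of the fault-injection idea, here is a toy application-level sketch; real chaos tools like Chaos Monkey and Gremlin inject faults at the infrastructure level rather than via decorators, and the failure rate, delay values, and function names below are all hypothetical:

```python
import random
import time

def chaos(failure_rate: float = 0.1, max_delay_s: float = 2.0):
    """Decorator that randomly injects errors and latency into a call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise RuntimeError(f"chaos: injected failure in {fn.__name__}")
            time.sleep(random.uniform(0, max_delay_s))  # injected latency
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.2)
def fetch_order(order_id: int) -> dict:
    # Stand-in for a downstream call whose resilience we want to exercise.
    return {"order_id": order_id, "status": "shipped"}

# Run repeatedly to observe how callers cope with failures and slow responses.
for i in range(5):
    try:
        print(fetch_order(i))
    except RuntimeError as err:
        print(err)
```

The point is the same as in production-grade chaos experiments: callers must be observed handling injected failures and latency gracefully, not just passing happy-path tests.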