Daily Tech Digest - March 25, 2025


Quote for the day:

“Only put off until tomorrow what you are willing to die having left undone.” -- Pablo Picasso


Why FinOps Belongs in Your CI/CD Workflow

By codifying FinOps governance policies, teams can put guardrails in place while still granting developers autonomy to create resources. Guardrails don’t stifle innovation — they’re simply there to prevent costly mistakes. Every engineer makes mistakes, but guardrails ensure that those mistakes don’t lead to $10K-per-day cloud bills caused by an overlooked database instance in a Terraform template pulled from GitHub. Additionally, policy enforcement must be dynamic and flexible, allowing organizations to adjust tagging, cost constraints and security requirements as they evolve. AI-driven governance can scale policy enforcement by identifying repeatable patterns and automating compliance checks across environments. ... Shifting left in FinOps isn’t just about cost visibility — it’s about ensuring cost efficiency is enforced as code, continuously, on your production systems. Legacy cost analysis tools provide visibility into cloud spending but rarely offer actionable cleanup recommendations. What’s needed instead are actionable insights for cloud waste reduction: predefined cost-saving policies that highlight underutilized or orphaned resources, and automated cleanup workflows that help reclaim unused infrastructure.
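To make "guardrails enforced as code" concrete, here is a minimal sketch of a CI policy check. It assumes a Terraform plan exported to JSON (for example via terraform show -json); the required tag keys, blocked instance classes and plan structure below are illustrative assumptions, not any particular vendor's policy engine.

```python
import json
import sys

# Hypothetical guardrail: fail the pipeline if a Terraform plan contains
# untagged or oversized resources. Tag keys and instance classes are examples.
REQUIRED_TAGS = {"cost-center", "owner"}
BLOCKED_INSTANCE_CLASSES = {"db.r5.24xlarge", "db.x1e.32xlarge"}

def check_plan(plan_path: str) -> list[str]:
    with open(plan_path) as f:
        plan = json.load(f)

    violations = []
    resources = plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
    for res in resources:
        values = res.get("values", {})
        tags = set((values.get("tags") or {}).keys())
        missing = REQUIRED_TAGS - tags
        if missing:
            violations.append(f"{res.get('address')}: missing tags {sorted(missing)}")
        if values.get("instance_class") in BLOCKED_INSTANCE_CLASSES:
            violations.append(f"{res.get('address')}: instance class not allowed")
    return violations

if __name__ == "__main__":
    problems = check_plan(sys.argv[1] if len(sys.argv) > 1 else "plan.json")
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)
```

A pipeline would run a check like this right after terraform plan and fail the build on a non-zero exit code, so a mis-sized or untagged database never reaches production in the first place.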


How AI is changing cybersecurity for better or worse

“Agentic AI, capable of independently planning and acting to achieve specific goals, will be exploited by threat actors,” Lohrmann says. “These AI agents can automate cyberattacks, reconnaissance and exploitation, increasing attack speed and precision.” Malicious AI agents might adapt in real-time, bypassing traditional defenses and enhancing the complexity of attacks, Lohrmann says. AI-driven scams and social engineering will surge, Lohrmann says. “AI will enhance scams like ‘pig butchering’ — long-term financial fraud — and voice phishing, making social engineering attacks harder to detect,” he says. ... AI can also benefit organizations’ cybersecurity programs. “In general, AI-enabled platforms can provide a more robust, technology-backed line of defense against threat actors,” Cullen says. “Because AI can process huge amounts of data, it can provide faster and less obvious alerts to these threats.” Cybersecurity teams need to “fight fire with fire” by detecting and stopping threats with AI tool sets, Lohrmann says. For example, new AI-enabled tools can detect and stop employee actions such as inappropriate clicking on links, sending emails to the wrong people, and other policy violations before a breach occurs.


Learning AI governance lessons from SaaS and Web2

Autonomous systems are advancing quickly, with agents emerging that are capable of communicating with each other, executing complex tasks, and interacting directly with stakeholders. While these autonomous systems introduce exciting new use cases, they also create substantial challenges. For example, an AI agent automating customer refunds might interact with financial systems, log reason codes for trends analysis, monitor transactions for anomalies, and ensure compliance with company and regulatory policies — all while navigating potential risks like fraud or misuse. ... Early SaaS and Web2 companies often relied on reactive strategies to address governance issues as they emerged, adopting a “wait and see” approach. SaaS companies focused on basics like release sign-offs, access controls, and encryption, while Web2 platforms struggled with user privacy, content moderation, and data misuse. This reactive approach was costly and inefficient. SaaS applications scaled with manual processes for user access management and threat detection that strained resources. ... A continuous, automated approach is the key to effective AI governance. By embedding tools that enable these features into their operations, companies can proactively address reputational, financial, and legal risks while adapting to evolving compliance demands.


7 types of tech debt that could cripple your business

As a software developer, writing code feels easier than reviewing someone else’s and understanding how to use it. Searching and integrating open source libraries and components can be even easier, as the weight of long-term support isn’t at the top of many developers’ minds when they are pressured to meet deadlines and deploy frequently. ... “The average app contains 180 components, and failing to update them leads to bloated code, security gaps, and mounting technical debt. Just as no one wants to run mission-critical systems on decade-old hardware, modern SDLC and DevOps practices must treat software dependencies the same way — keep them updated, streamlined, and secure.” ... CIOs with sprawling architectures should consider simplification, and one step is to establish architectural observability practices. These include creating architecture and platform performance indicators by aggregating application-level monitoring, observability, code quality, total costs, DevOps cycle times, and incident metrics as a tool to evaluate where architecture impacts business operations. ... Joe Byrne, field CTO of LaunchDarkly, says, “Cultural debt can have several negative impacts, but specific to AI, a lack of proper engineering practices, resistance to innovation, tribal knowledge gaps, and failure to adopt modern practices all create significant roadblocks to successfully leveraging AI.”


Why people are the key to successful cloud migration

The consequences of overlooking the human element are significant. According to McKinsey’s research, European companies are five times more likely than their US counterparts to pursue an IT-led cloud migration, focusing primarily on ‘lifting and shifting’ existing workloads rather than transforming how people work. This approach might explain why many organisations are seeing limited returns on their investment. Migration creates a good opportunity to review methods and processes while ensuring teams have the tools they need to work efficiently. Without attention to both human impact and technological enablement, even the most technically sound migration can fail to deliver the desired results. ... The true value of cloud transformation extends far beyond technical metrics and cost savings. Organisations need to track employee satisfaction and engagement levels alongside traditional technical key performance indicators (KPIs). This includes monitoring adoption rates of new tools, time saved through improved processes, and skill development achievements. Business impact measures should encompass customer satisfaction, process efficiency improvements, and innovation metrics. Long-term value indicators such as employee retention rates, internal mobility, and team productivity provide a more complete picture of transformation success.


Evolving Technology and Corporate Culture Toward Autonomous IT and Agentic AI

Corporate culture will shape how seamlessly and effectively the modernization effort toward a more autonomous and intelligent enterprise operation will unfold. The best approaches align technology and culture along a structured journey model — assessing both the IT and workforce needs around data maturity, process automation, AI readiness, and success metrics. Such efforts can quickly propel organizations toward the largely self-sustaining capabilities and ecosystem of Agentic AI and autonomic IT. As IT teams become more comfortable relying on AI, machine learning, predictive analytics, and automation, they can begin to turn their attention to unlocking the power of Agentic AI. The term refers to advanced scenarios where machine and human resources blend to create an AI assistant capable of delivering accurate predictions, tailored recommendations, and intelligent automations that drive business efficiency and innovation. Such systems leverage generative AI and unsupervised ML combined with human-in-the-loop automation training models to revolutionize IT operations. By relinquishing responsibility for mundane, repetitive tasks, IT teams can begin to reap the benefits of autonomic IT — a seamlessly integrated ecosystem of advanced technologies designed to enhance IT operations.


Building a Data Governance Strategy

In implementing a data strategy, a company can face several obstacles, including:
Cultural resistance: Cultural resistance emerges throughout the DG journey, from initial strategy discussions through implementation and beyond. Teams and departments may resist changes to their established processes and workflows, requiring sustained change management efforts and clear communication of benefits.
Lack of resources: Viewing governance solely through a compliance lens leads to underinvestment, with 54% of data and analytics professionals finding the biggest hurdle is a lack of funding for their data programs. In the meantime, the demands of data governance have increased significantly due to a complex and evolving regulatory landscape and accelerated digital transformation where businesses must rely heavily on data-driven systems.
Scalability: Modern enterprises must manage data across an increasingly complex ecosystem of cloud platforms, personal devices, and decentralized systems. This dispersed data environment creates significant challenges for maintaining consistent governance practices and data quality.
Demands for unstructured data: The growing demand for AI-driven insights requires organizations to govern increasing volumes of unstructured data, including videos, emails, documents, and images.


How CISOs can meet the demands of new privacy regulations

The responsibility for implementing and documenting privacy controls and policies falls primarily on the shoulders of the CISO, who must ensure that the organization’s procedures for managing information protect privacy data and meet regulatory requirements. Performing risk assessments that identify weaknesses and demonstrate that they are being addressed is a crucial step in the process, even more so now that they must be ready to produce risk assessments whenever regulatory bodies request them. As if CISOs needed an added incentive, regulators at the state and federal levels have been trending toward targeting organization management, particularly CISOs, in the wake of costly breaches. The consequences include hefty fines for organizations and, in worst-case scenarios, even jail sentences for CISOs. Responsibility for privacy protections also extends to third-party risks. Organizations can’t afford to rely solely on promises made by third-party providers because regulators and state attorneys general can hold an organization responsible for a breach, even if the exploited vulnerability belonged to a provider. Organizations need to implement a framework for third-party risk management that includes performing due diligence on the security postures of third parties.


Guess Who’s Hiding in Your Supply Chain

There are plenty of high-profile attacks that demonstrate how hackers use the supply chain to access their target organisation. One of the most notable attacks on a supply chain was on SolarWinds, where hackers deployed malicious code into its IT monitoring and management software, enabling them to reach other companies within the supply chain. Once hackers were inside, they were able to compromise data, networks and systems of thousands of public and private organisations. This included spying on government agencies, in what became a major breach of national security. Government departments noticed that sensitive emails were missing from their systems and major private companies such as Microsoft, Intel, and Deloitte were also affected. With internal workings exposed, hackers could also gain access to data and networks of customers and partners of those originally affected, allowing the attack to spiral in impact and affect thousands of organisations. Visibility is key to guard against future attacks – without it an organisation can’t effectively or reliably identify suspicious activity. ... When you put this into perspective, the amount of damage a cyber intruder could cause becomes unfathomable. Security teams must deploy a multi-layered arsenal of tools and tactics to cover their bases and should provision identities with only as much access as is absolutely necessary.


11 ways cybercriminals are making phishing more potent than ever

Brand impersonation continues to be a favored method to trick users into opening a malicious file or entering their details on a phishing site. Threat actors typically impersonate major brands, including document sharing platforms such as Microsoft’s OneDrive and SharePoint, and, increasingly frequently, DocuSign. Attackers exploit employees’ inherent trust in commonly used applications by spoofing their branding before tricking recipients into entering credentials or approving fraudulent document requests. ... Another significant phishing evolution involves abusing trusted services and content delivery platforms. Attackers are increasingly using legitimate document-signing and file-hosting services to distribute phishing lures. They first upload malicious content to a reputable provider, then craft phishing emails or messages that reference these trusted services and content delivery platforms. “Since these services host the attacker’s content, vigilant users who check URLs before clicking may still be misled, as the links appear to belong to legitimate and well-known platforms,” warns Greg ... Image-based phishing is becoming more complex. For example, fraudsters are crafting images to look like text-based emails to improve their apparent authenticity, while still bypassing conventional email filters.


Daily Tech Digest - March 24, 2025


Quote for the day:

"To be an enduring, great company, you have to build a mechanism for preventing or solving problems that will long outlast any one individual leader" -- Howard Schultz



Identity Authentication: How Blockchain Puts Users In Control

One key benefit of blockchain is that it's decentralized. Instead of a single database that records user information -- one ripe for data breaches -- blockchain uses something called decentralized identifiers (DIDs). DIDs are cryptographic key pairs that allow users to have more control over their online identities. They are becoming more popular, with Forbes claiming they're the future of online identity. To explain what DIDs are, let's start by explaining what they are not. Today, most people interact online via a centralized identifier, such as an email address, username or password. This allows the database to store your digital information on that platform. But single databases are more vulnerable to data breaches and users have no control over their data. When we use centralized platforms, we really hand over all our trust to whatever platform we use. DIDs provide a new way to access information while allowing users to maintain ownership. ... That said, identity authentication and blockchain technology don't have to be complex topics. They can be easy to use but require intuitive platforms and simple user experiences. The EU's digital policies offer a strong foundation for integrating blockchain. If blockchain becomes part of the initial rulemaking, it could fuel more widespread adoption. There's a long way to go before people feel confident understanding concepts like DIDs. 
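The core idea behind a DID is that the user, not a central database, holds the cryptographic key pair that anchors the identity. The sketch below illustrates only that idea with the Python cryptography package; it is not a conformant DID method (real methods such as did:key define exact encodings and resolution rules), and the did:example prefix is the identifier scheme reserved for examples.

```python
# Illustrative only: the user generates and keeps the private key; only the
# public part is shared. This is NOT a real DID method implementation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# A toy identifier derived from the public key.
did = "did:example:" + public_bytes.hex()
print(did)

# Proving control of the identifier means signing a challenge with the
# private key, instead of handing a password to a central platform.
challenge = b"login-challenge-123"
signature = private_key.sign(challenge)
private_key.public_key().verify(signature, challenge)  # raises if invalid
print("challenge verified: identity proven without a central password database")
```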


Cloud providers aren’t delivering on security promises

With 44% of businesses already spending between £101,000 and £250,000 on cloud migrations in the past 12 months, there is a clear need for organizations to ensure they are working with trusted partners who can meet this security need. Otherwise, companies will run the risk of having to spend more to not only move to new suppliers but also respond to the cost of a data breach. The cost and resources needed for organizations to boost their own security skills and technology is often too prohibitive. ... However, despite the clear advantages to security and job stability, only 22% of CISOs use a channel partner in their cloud migration process. This is leaving many exposed to unnecessary risk from attacks or job loss. “It is clear that many organizations are struggling when it comes to securing cloud environments. A combination of underdelivering cloud providers and a lack of in-house skills is resulting in a dangerous situation which can leave valuable company data exposed to risk. Simply adding more technology will not solve this problem,” said Clare Loveridge, VP and GM EMEA at Arctic Wolf. “Securing the cloud is a shared responsibility between the cloud provider and the organization. While cloud providers offer good security tools it is important that you have a team of security experts to help you run the operation.


CISOs are taking on ever more responsibilities and functional roles – has it gone too far?

“The CISO role has expanded significantly over the years as companies realize that information security has a unique picture of what is going on across the organization,” says Doug Kersten, CISO of software company Appfire. “Traditionally, CISOs have focused on fundamental security controls and threat mitigation,” he adds. “However, today they are increasingly expected to play a central role in maintaining business resilience and compliance. Many CISOs are now responsible for risk management, business continuity, and disaster recovery as well as overseeing regulatory compliance across various jurisdictions.” ... “We’re seeing a convergence of roles under head of security because of the background and problem-solving skills of these people. They have become problem-solver in chief,” says Steve Martano, IANS Research faculty and executive cyber recruiter at Artico Search. That, though, comes with challenges. “CISOs are already experiencing high levels of stress, with recent data highlighting that nearly one in four CISOs are considering leaving the profession due to stress,” Kersten says. “Many CISOs only stay in the role for two to three years. With this, the expectations placed on CISOs are undeniably growing, and organizations risk overburdening them without sufficient resources and support. ..."


Fixing the Fixing Process: Why Automation is Key to Cybersecurity Resilience

Cybersecurity environments have seen nonstop evolution, driven by increasingly sophisticated attack techniques, the expansion of complex cloud-native architecture, and the rise of AI-powered threats that outpace traditional defense strategies. At the same time, development timelines have accelerated, pushing security teams to keep pace without becoming a bottleneck. ... It’s a daunting and intimidating task that requires sufficient time and attention. Moreover, adopting automation means ensuring that security and development teams trust the outputs. Many organizations struggle with this transition because automation tools, if not properly configured, can generate inaccuracies or miss critical context. Security teams fear losing control over decision-making, while developers worry about receiving even more noise if automation isn’t fine-tuned. ... Attackers are already leveraging AI to exploit vulnerabilities rapidly, while security teams often rely on static and manual processes that have no chance of keeping up. AI-enabled EAPs help teams proactively identify and mitigate vulnerabilities before adversaries can exploit them. By automating exposure assessments, organizations can shrink the reconnaissance window available to attackers, limiting their ability to target common vulnerabilities and exposures (CVEs), security misconfigurations, software flaws, and other weaknesses. 


Can we make AI less power-hungry? These researchers are working on it.

Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. “That was really core to why Nvidia was born. We paired CPUs with accelerators to drive the efficiency onward,” said Dion Harris, head of Data Center Product Marketing at Nvidia. In the 2010–2020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady. ... The increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. “One way to go about it is reducing the amount of computation,” said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative. One of the first things researchers tried was a technique called pruning, which aimed to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) “the optimal brain damage.” You take a trained model and remove some of its parameters, usually targeting the ones with a value of zero, which add nothing to the overall performance. 
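As a rough illustration of the pruning idea described above, here is a minimal magnitude-pruning sketch in NumPy. Real pruning pipelines (including LeCun's Optimal Brain Damage, which uses second-order information and retraining) are more involved; treat this only as a picture of "removing parameters that contribute little".

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Illustrative only: production pruning usually fine-tunes the remaining
    weights afterwards and may use criteria beyond raw magnitude.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.5)
print("nonzero before:", np.count_nonzero(w), "after:", np.count_nonzero(pruned))
```

Fewer nonzero parameters means fewer multiply-accumulate operations at inference time, which is where the energy saving comes from.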


Five Years of Cloud Innovation: 2020 to 2025

The FinOps organization and the implementation of FinOps standards across cloud providers has been the most impactful development over the last five years, states Allen Brokken, head of customer engineering at Google, in an online interview. This has fundamentally transformed how organizations understand the business value of their cloud deployments, he states. "Standardization has enabled better comparisons between cloud providers and created a common language for technical teams, business unit owners, and CFOs to discuss cloud operations." ... The public cloud has democratized access to technology and increased accessibility for organizations across industries that have faced intense volatility and change in the past five years, Adams observes via email. "This innovation has facilitated a new level of co-innovation and enabled new business models that allow companies to realize future opportunities with ease." Public cloud platforms offer adopters immense benefits, Adams says. "With the public cloud, businesses can scale IT infrastructure on-demand without significant upfront investment." This flexibility comes with a reduced total cost of ownership, since public cloud solutions often lead to lower costs for hardware, software and maintenance. 


Cloud, colocation or on-premise? Consider all options

Following the rush to the cloud, the cost implications should have prompted some companies to move back to on-premise, but it hasn’t, according to Lamb. “I thought it might happen with AI, because potentially the core per hour rate for AI is going to be far higher, but it hasn’t.” Lamb’s advice for CIOs is to be wary of being tied into particular providers or AI models, noting that Microsoft is creating models and not charging for them, knowing that companies will still be paying for the compute to use them. Lamb also says that, whether we’re talking on-premise, colocation or cloud, the potential for retrofitting existing capacity is limited, at least when it comes to capacity aimed at AI. After all, those GPUs often require liquid cooling to the chip. This changes the infrastructure equation, says Lamb, increasing the footprint for cooling infrastructure in comparison to compute. Quite apart from the real estate impact, this isn’t something most enterprises will want to tackle. Also, cooling and power will only become more complicated. Andrew Bradner, Schneider Electric’s general manager for cooling, is confident that many sectors will continue to operate on-premise datacentre capacity – life sciences, fintech and financial, for example.


How GenAI is Changing Work by Supporting, Not Replacing People

A common misconception is that AI adoption leads to workforce reduction. While automation has historically replaced repetitive, manual labor, the rise of GenAI is fundamentally different. Unlike traditional automation, which replaces human effort, GenAI amplifies human potential by reducing workload friction. The same science study reinforces this point: AI doesn’t just increase speed; it also improves work quality. Employees using AI-powered tools experienced a 40% reduction in task completion time and an 18% improvement in output quality, demonstrating that AI is an efficiency enabler rather than a job replacer. Consider the historical trend: The Industrial Revolution automated factory work but also created entirely new job categories and industries. Similarly, the digital revolution reduced the need for clerical roles yet generated millions of jobs in software development, cybersecurity, and IT infrastructure. ... Biases in machine learning models are still an issue since AI based on data from the past will perpetuate prevailing biases, and thus human monitoring is critical. GenAI can also generate misleading or inaccurate results, further highlighting the need for oversight. AI can generate reports, but it cannot negotiate deals, understand organizational culture, or make leadership decisions. 


Frankenstein Fraud: How to Protect Yourself Against Synthetic Identity Fraud

Synthetic identity fraud is an exercise in patience, at least on the criminal's part, especially if they're using the Social Security number of a child. The identity is constructed by using a real Social Security number in combination with an unassociated name, address, date of birth, phone number or other piece of identifying information to create a new "whole" identity. Criminals can purchase SSNs on the dark web, steal them from data breaches or con them from people through things like phishing attacks and other scams. Synthetic identity theft flourishes because of a simple flaw in the US financial and credit system. When the criminal uses the synthetic identity to apply to borrow from a lender, it's typically denied credit because there's no record of that identity in their system. The thieves are expecting this since children and teens may have no credit or a thin history, and elderly individuals may have poor credit scores. Once an identity applies for an account and is presented to a credit bureau, it's shared with other credit bureaus. That act is enough to allow credit bureaus to recognize the synthetic identity as a real person, even if there's little activity or evidence to support that it's a real person. Once the identity is established, the fraudsters can start borrowing credit from lenders.


Will AI erode IT talent pipelines?

“The pervasive belief that gen AI is an automation technology, that gen AI increases productivity by automation, is a huge fallacy,” says Suda, though he admits it will eliminate the need for certain skills — including IT skills. “Losing skills is fine,” he says, adding that machines have been eliminating the need for certain skills for centuries. “What gen AI is helping us do is learn new skills and learn new things, and that does create an impact on the workforce. “What it is eroding is the opportunity for junior IT staff to have the same experiences that junior staff have today or yesterday,” he says. “Therefore, there’s an erosion of yesterday’s talent pipeline. Yesterday’s talent pipeline is changing, and the steps to get through it are changing from what we have today to what we need [in the future].” Steven Kirz, senior partner for operations excellence at consulting firm West Monroe, shares similar insights. Like Suda, Kirz says AI doesn’t “universally make everybody more productive. It’s unequal across roles and activities.” Kirz also says both research and anecdotal evidence show that AI is replacing lower-level, mundane, and repetitive tasks. In IT, that tends to be reporting, clerical, data entry, and administrative activities. “And routine roles being replaced [by technology] doesn’t feel new to me,” he adds.


Daily Tech Digest - March 23, 2025


Quote for the day:

"Law of Leadership: A successful team with 100 members has 100 leaders." -- Lance Secretan


Citizen Development: The Wrong Strategy for the Right Problem

The latest generation of citizen development offenders are the low-code and no-code platforms that promise to democratize software development by enabling those without formal programming education to build applications. These platforms fueled enthusiasm around speedy app development — especially among business users — but their limitations are similar to the generations of platforms that came before. ... Don't get me wrong — the intentions behind citizen development come from a legitimate place. More often than not, IT needs to deliver faster to keep up with the business. But these tools promise more than they can deliver and, worse, usually result in negative unintended consequences. Think of it as a digital house of cards, where disparate apps combine to create unscalable systems that can take years and/or millions of dollars to fix. ... Struggling to keep up with business demands is a common refrain for IT teams. Citizen development has attempted to bridge the gap, but it typically creates more problems than solutions. Rather than relying on workarounds and quick fixes that potentially introduce security risks and inefficiency — and certainly rather than disintermediating IT — businesses should embrace the power of GenAI to support their developers and ultimately to make IT more responsive and capable.


Researchers Test a Blockchain That Only Quantum Computers Can Mine

The quantum blockchain presents a path forward for reducing the environmental cost of digital currencies. It also provides a practical incentive for deploying early quantum computers, even before they become fully fault-tolerant or scalable. In this architecture, the cost of quantum computing — not electricity — becomes the bottleneck. That could shift mining centers away from regions with cheap energy and toward countries or institutions with advanced quantum computing infrastructure. The researchers also argue that this architecture offers broader lessons. ... “Beyond serving as a proof of concept for a meaningful application of quantum computing, this work highlights the potential for other near-term quantum computing applications using existing technology,” the researchers write. ... One of the major limitations, as mentioned, is cost. Quantum computing time remains expensive and limited in availability, even as energy use is reduced. At present, quantum PoQ may not be economically viable for large-scale deployment. As progress continues in quantum computing, those costs may be mitigated, the researchers suggest. D-Wave machines also use quantum annealing — a different model from the quantum computing platforms pursued by companies like IBM and Google. 


Enterprise Risk Management: How to Build a Comprehensive Framework

Risk objects are the human capital, physical assets, documents and concepts (e.g., “outsourcing”) that pose risk to an organization. Stephen Hilgartner, a Cornell University professor, once described risk objects as “sources of danger” or “things that pose hazards.” The basic idea is that any simple action, like driving a car, has associated risk objects – such as the driver, the car and the roads. ... After the risk objects have been defined, the risk management processes of identification, assessment and treatment can begin. The goal of ERM is to develop a standardized system that not only acknowledges the risks and opportunities in every risk object but also assesses how the risks can impact decision-making. For every risk object, hazards and opportunities must be acknowledged by the risk owner. Risk owners are the individuals managerially accountable for the risk objects. These leaders and their risk objects establish a scope for the risk management process. Moreover, they ensure that all risks are properly managed based on approved risk management policies. To complete all aspects of the risk management process, risk owners must guarantee that risks are accurately tied to the budget and organizational strategy.
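One way to make "risk objects with accountable owners" concrete is a simple registry structure that ties each object to its owner and a score for prioritization. The field names and 1-to-5 scoring below are assumptions for illustration, not a standard ERM schema.

```python
from dataclasses import dataclass, field

# Illustrative registry of risk objects and their accountable owners.
@dataclass
class RiskObject:
    name: str                  # e.g. "outsourcing", "customer data store"
    owner: str                 # individual managerially accountable for the object
    hazards: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    likelihood: int = 1        # 1 (rare) .. 5 (almost certain)
    impact: int = 1            # 1 (negligible) .. 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

registry = [
    RiskObject("outsourcing", owner="COO", hazards=["vendor lock-in"], likelihood=3, impact=4),
    RiskObject("customer data store", owner="CISO", hazards=["data breach"], likelihood=2, impact=5),
]

# Risk owners review the highest-scoring objects first when setting budget and strategy.
for obj in sorted(registry, key=lambda r: r.risk_score, reverse=True):
    print(f"{obj.name}: score {obj.risk_score}, owner {obj.owner}")
```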


Choosing consequence-based cyber risk management to prioritize impact over probability, redefine industrial security

Nonetheless, the biggest challenge for applying consequence-based cyber risk management is the availability of holistic information regarding cyber events and their outcomes. Most companies struggle to gauge the probable damage of attacks based on inadequate historical data or broken-down information systems. This has led to increased adoption of analytics and threat intelligence technologies to enable organizations to simulate the ‘most likely’ outcome of cyber-attacks and predict probable situations. ... “A winning strategy incorporates prevention and recovery. Proactive steps like vulnerability assessments, threat hunting, and continuous monitoring reduce the likelihood and impact of incidents,” according to Morris. “Organizations can quickly restore operations when incidents occur with robust incident response plans, disaster recovery strategies, and regular simulation exercises. This dual approach is essential, especially amid rising state-sponsored cyberattacks.” ... “To overcome data limitations, organizations can combine diverse data sources, historical incident records, threat intelligence feeds, industry benchmarks, and expert insights, to build a well-rounded picture,” Morris detailed. “Scenario analysis and qualitative assessments help fill in gaps when quantitative data is sparse. Engaging cross-functional teams for continuous feedback ensures these models evolve with real-world insights.”


The CTO vs. CMO AI power struggle - who should really be in charge?

An argument can be made that the CTO should oversee everything technical, including AI. Your CTO is already responsible for your company's technology infrastructure, data security, and system reliability, and AI directly impacts all these areas. But does that mean the CTO should dictate what AI tools your creative team uses? Does the CTO understand the fundamentals of what makes good content or the company's marketing objectives? That sounds more like a job for your creative team or your CMO. On the other hand, your CMO handles everything from brand positioning and revenue growth to customer experiences. But does that mean they should decide what AI tools are used for coding or managing company-wide processes or even integrating company data? You see the problem, right? ... Once a tool is chosen, our CTO will step in. They perform their due diligence to ensure our data stays secure, confidential information isn't leaked, and none of our secrets end up on the dark web. That said, if your organization is large enough to need a dedicated Chief AI Officer (CAIO), their role shouldn't be deciding AI tools for everyone. Instead, they're a mediator who connects the dots between teams. 


Why Cyber Quality Is the Key to Security

To improve security, organizations must adopt foundational principles and assemble teams accountable for monitoring safety concerns. Cyber resilience and cyber quality are two pillars that every institution — especially at-risk ones — must embrace. ... Do we have a clear and tested cyber resilience plan to reduce the risk and impact of cyber threats to our business-critical operations? Is there a designated team or individual focused on cyber resilience and cyber quality? Are we focusing on long-term strategies, targeted at sustainable and proactive solutions? If the answer to any of these questions is no, something needs to change. This is where cyber quality comes in. Cyber quality is about prioritization and sustainable long-term strategy for cyber resilience, and is focused on proactive/preventative measures to ensure risk mitigation. This principle is not a marked checkbox on controls that show very little value in the long run. ... Technology alone doesn't solve cybersecurity problems — people are the root of both the challenges and the solutions. By embedding cyber quality into the core of your operations, you transform cybersecurity from a reactive cost center into a proactive enabler of business success. Organizations that prioritize resilience and proactive governance will not only mitigate risks but thrive in the digital age. 


ISO 27001: Achieving data security standards for data centers

Achieving ISO 27001 certification is not an overnight process. It’s a journey that requires commitment, resources, and a structured approach in order to align the organization’s information security practices with the standard’s requirements. The first step in the process is conducting a comprehensive risk assessment. This assessment involves identifying potential security risks and vulnerabilities in the data center’s infrastructure and understanding the impact these risks might have on business operations. This forms the foundation for the ISMS and determines which security controls are necessary. ... A crucial, yet often overlooked, aspect of ISO 27001 compliance is the proper destruction of data. Data centers are responsible for managing vast amounts of sensitive information and ensuring that data is securely sanitized when it is no longer needed is a critical component of maintaining information security. Improper data disposal can lead to serious security risks, including unauthorized access to confidential information and data breaches. ... Whether it's personal information, financial records, intellectual property, or any other type of sensitive data, the potential risks of improper disposal are too great to ignore. Data breaches and unauthorized access can result in significant financial loss, legal liabilities, and reputational damage.


Understanding code smells and how refactoring can help

Typically, code smells stem from a failure to write source code in accordance with necessary standards. In other cases, it means that the documentation required to clearly define the project's development standards and expectations was incomplete, inaccurate or nonexistent. There are many situations that can cause code smells, such as improper dependencies between modules, an incorrect assignment of methods to classes or needless duplication of code segments. Code that is particularly smelly can eventually cause profound performance problems and make business-critical applications difficult to maintain. It's possible that the source of a code smell may cause cascading issues and failures over time. ... The best time to refactor code is before adding updates or new features to an application. It is good practice to clean up existing code before programmers add any new code. Another good time to refactor code is after a team has deployed code into production. After all, developers have more time than usual to clean up code before they're assigned a new task or a project. One caveat to refactoring is that teams must make sure there is complete test coverage before refactoring an application's code. Otherwise, the refactoring process could simply restructure broken pieces of the application for no gain. 
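As a small, hypothetical example of the "needless duplication" smell, and the kind of refactoring that removes it without changing behavior, consider a discount rule copied into two functions:

```python
# Before: the same discount rule is duplicated in two places (a code smell),
# so a future change must be made twice and the copies can easily diverge.
def invoice_total_smelly(items, is_member):
    total = sum(price * qty for price, qty in items)
    if is_member and total > 100:
        total = total * 0.9
    return total

def quote_total_smelly(items, is_member):
    total = sum(price * qty for price, qty in items)
    if is_member and total > 100:
        total = total * 0.9
    return total

# After: the duplicated rule is extracted into one function. Behavior is
# unchanged, which is exactly what the existing test suite should confirm.
def apply_member_discount(total, is_member):
    return total * 0.9 if is_member and total > 100 else total

def invoice_total(items, is_member):
    return apply_member_discount(sum(p * q for p, q in items), is_member)

def quote_total(items, is_member):
    return apply_member_discount(sum(p * q for p, q in items), is_member)

assert invoice_total([(60, 2)], True) == invoice_total_smelly([(60, 2)], True)
```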


Handling Crisis: Failure, Resilience And Customer Communication

Failure is something leaders want to reduce as much as they can, and it’s possible to design products with graceful failure in mind. It’s also called graceful degradation and can be thought of as a tolerance to faults or faulting. It can mean that core functions remain usable as parts or connectivity fails. You want any failure to cause as little damage or lack of service as possible. Think of it as a stopover on the way to failing safely: When our plane engines fail, we want them to glide, not plummet. ... Resilience requires being on top of it all: monitoring, visibility, analysis and meeting and exceeding the SLAs your customers demand. For service providers, particularly in tech, you can focus on a full suite of telemetry from the operational side of the business and decide your KPIs and OKRs. You can also look at your customers’ perceptions via churn rate, customer lifetime value, Net Promoter Score and so on. ... If you are to cope with the speed and scale of potential technical outages, this is essential. Accuracy, then speed, should be your priorities when it comes to communicating about outages. The more of both, the better, but accuracy is the most important, as it allows customers to make informed choices as they manage the impact on their own businesses.


Approaches to Reducing Technical Debt in Growing Projects

Technical debt, also known as “tech debt,” refers to the extra work developers incur by taking shortcuts or delaying necessary code improvements during software development. Though sometimes these shortcuts serve a short-term goal — like meeting a tight release deadline — accumulating too many compromises often results in buggy code, fragile systems, and rising maintenance costs. ... Massive rewrites can be risky and time-consuming, potentially halting your roadmap. Incremental refactoring offers an alternative: focus on high-priority areas first, systematically refining the codebase without interrupting ongoing user access or new feature development. ... Not all parts of your application contribute to technical debt equally. Concentrate on elements tied directly to core functionality or user satisfaction, such as payment gateways or account management modules. Use metrics like defect density or customer support logs to identify “hotspots” that accumulate excessive technical debt. ... Technical debt often creeps in when teams skip documentation, unit tests, or code reviews to meet deadlines. A clear “definition of done” helps ensure every feature meets quality standards before it’s marked complete.
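A minimal sketch of the "hotspot" idea mentioned above: rank modules by defect density (defects per thousand lines of code) so refactoring effort goes where debt is concentrated. The module names and numbers are invented; in practice the data would come from an issue tracker and a line-count tool.

```python
# Hypothetical defect and size data per module.
modules = {
    "payments/gateway.py": {"defects": 14, "loc": 2200},
    "accounts/profile.py": {"defects": 3, "loc": 1800},
    "reports/export.py":   {"defects": 9, "loc": 900},
}

def defect_density(defects: int, loc: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (loc / 1000)

hotspots = sorted(
    modules.items(),
    key=lambda kv: defect_density(kv[1]["defects"], kv[1]["loc"]),
    reverse=True,
)

for name, stats in hotspots:
    print(f"{name}: {defect_density(stats['defects'], stats['loc']):.1f} defects/KLOC")
```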

Daily Tech Digest - March 21, 2025


Quote for the day:

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell



Synthetic data and the risk of ‘model collapse’

There is a danger of an ‘ouroboros’ here, or a snake eating its own tail. Models can be ‘poisoned’ with data that is passed on in addition to malicious prompts. While usually caused by sabotage, this can also be unintentional: AI models sometimes hallucinate, including when they are generating data for their LLM descendant. With enough ongoing errors, a new LLM risks performing worse than its predecessors. At its core, it’s a simple case of garbage in, garbage out. The logical end state is a total ‘model collapse‘, where drivel overtakes anything factual and makes an LLM dysfunctional. Should this happen (and it may have happened with GPT-4.5), AI model makers are forced to pull back to an earlier checkpoint, reassess their data or be forced to make architectural changes. ... In short, a high degree of expertise is required for each step in the AI process. Currently, attention is focused on the initial building of the foundation models on the one hand and the actual implementation of GenAI on the other. The importance of training data was touched upon in 2023 because online organizations regularly felt robbed. In essence: it made headlines, which is why we all became aware of the intricacies of training data. Now that the flow of online retrievable data is ending, AI players are grasping for an alternative that is creating new problems.


Automated Workflow Perfection Is a Job in Itself

“The fragmented nature of automation – spanning robotic process automation, business process management, workflow tools and AI-powered solutions all further complicates consistent measurement,” lamented Gaudette. “Market segment overlap presents another challenge. As technologies increasingly converge, traditional category boundaries blur. A document processing solution might be classified under workflow automation by one analyst and digital process automation by another, creating inconsistent market size calculations.” Other survey “findings” from Custom Workflows’ analysis report suggest that the integration of artificial intelligence with traditional automation represents a particularly powerful growth catalyst. McKinsey’s own analysis reveals that while basic automation delivers 20-30% cost reductions, intelligent automation incorporating AI can achieve 50-70% savings while simultaneously improving quality and customer experience. ... As the market for workflow automation now goes into what we might call an amplified state of flux, it appears that current automation adoption follows a classic bell curve distribution, with most organizations clustered in the middle stages of implementation maturity. Surprisingly, smaller organizations often outperform their larger counterparts when it comes to automation success. 


The hidden risk in SaaS: Why companies need a digital identity exit strategy

To reduce dependency on external SaaS providers, organizations should consider taking back control of their digital identity infrastructure. This doesn’t mean abandoning cloud services altogether, but rather strategically deploying identity management solutions that provide ownership and portability. Self-hosted identity solutions running on private cloud or on-premises environments can offer greater control. Businesses should also consider multi-cloud identity architectures allowing authentication and access control to function across different cloud providers.  ... Organizations must closely monitor data sovereignty laws and adjust their infrastructure accordingly. Ensuring that identity solutions comply with shifting regulations will help avoid legal and operational risks. To avoid being caught off guard, it’s important for IT teams to understand what’s going on behind the scenes rather than entirely outsourcing their infrastructure. For the highest level of preparedness, organizations can manage identity infrastructure systems themselves, reducing reliance on third party SaaS companies for critical functions. If teams understand the inner workings of their identity management, they will be better placed to develop an emergency response plan with predefined steps to transition services in case of sudden geopolitical changes.


Why Your Business Needs an AI Innovation Unit

An AI innovation unit should always support sustainable and strategic organizational growth through the ethical and impactful application and integration of AI, McDonagh-Smith says. "Achieving this mission involves identifying and deploying AI technologies to solve complex and simple business problems, improving efficiency, cultivating innovation, and creating measurable new organizational value." A successful unit, McDonagh-Smith states, prioritizes aligning AI initiatives with the enterprise's long-term vision, ensuring transparency, fairness, and accountability in its AI applications. ... An AI innovation unit leader is foremost a business leader and visionary, responsible for helping the enterprise embrace and effectively use AI in an ethical and responsible manner, Hall says. "The leader needs to understand the risk and concerns, but also AI governance and frameworks." He adds that the leader should also be realistic and inspiring, with an understanding of the hype curve and the technology's potential. ... An AI innovation unit requires a collaborative culture that bridges silos within the organization and commits to continuous reflection and learning, McDonagh-Smith says. "The unit needs to establish practical partnerships with academic institutions, tech startups, and AI thought leadership groups to create flows of innovation, intelligence, and business insights."


How to avoid the AI complexity trap

When done right, AI enables simplicity, cutting across layers of complexity -- but with limits. "AI is not a silver bullet," said Richard Demeny, a software development consultant, formerly with Arm. "LLMs under the hood actually use probabilities, not understanding, to give answers. It's humans who design, build, and implement systems, and while AI may automate some entry-level roles and certainly bring significant productivity gains, it cannot replace the amount of practical experience IT decision-makers need to make the right trade-offs." ... To keep both AI and IT complexity at bay, "deployment of AI needs to be thoughtful," said Hashim. "Focus on the simplicity of user experience, quality of AI, and its ability to get things done," she said. "Uplevel all your employees with AI so that your organization as a whole can be more productive and happy." Consistency is the key to managing complexity, Howard said. Platforms, for example, "make things consistent. So you're able to do things -- sometimes very complicated things -- in consistent ways and standard ways that everybody knows how to use them. Even something as simple as definitions or taxonomy. If everybody is speaking the same language, so a simplified taxonomy, then it's much easier to communicate."  


Outsmart the skills gap crisis and build a team without recruitment

Team augmentation involves engaging external software engineers from a partner company to complement an existing in-house team. This approach provides companies with the flexibility to quickly scale their technical resources up or down, depending on the project’s needs, and plug any capability gaps inside their teams. It can be crucial to the success of businesses whose product is software, or relies on software, as it enables businesses to scale their team and projects flexibly without the risks involved with growing an in-house team. ... It allows companies to access a diverse range of skills and expertise that may not be available in-house. Companies can quickly ramp up their technical resources and tackle projects that require specialised skills or knowledge whilst onboarding engineers that can bring fresh ideas and perspectives to the project. Having access to this expertise quickly is often of paramount importance as companies compete to grow. For instance, if a company needs to design, develop, and support a mobile app, but its in-house team lacks the necessary skills and experience, it can quickly engage a team of engineers who specialise in mobile app development to work on the project. This approach can help companies save time and resources and ensure that their projects are completed on time and to a high standard.


Taking AI Commoditization Seriously

Commoditization is the process of products or services becoming “standardized, marketable objects.” Any given unit of a commodity, from corn to crude oil, is generally interchangeable with and sells for the same price as others. Commoditization of frontier models could emerge in a few ways. Perhaps, as Yann LeCun predicts, open-source models could equal or surpass closed-source performance. Or perhaps competing firms continue finding ways to match each other’s developments. Such competition has more above-board variants—top-tier engineers at different firms keeping pace with each other—and less. Consider, for instance, OpenAI’s allegations against DeepSeek of inappropriate copying. ... The emergence of new, decentralized AI threat vectors could offer the powers that be a common enemy. This might present a unique opportunity for US-China collaboration. Modern US-China collaboration has required tangible mutual interest to succeed. The most famous modern US-China agreement, the Nixon/Kissinger-Mao/Zhou normalization of US-China relations, occurred in large part to overcome a perceived common threat in the USSR. When few companies control cutting-edge frontier models, preventing third-party model misuse is comparatively simple. Fewer frontier developers imply fewer sites to monitor for malicious actors. 


Making Architecturally Significant Decisions

Architectural decisions are at the root of our practice but they are often hard to spot. The vast majority of decisions get processed at the team level and do not apply architectural thinking or have an architect involved at all. This approach can be a benefit in agile organizations if managed and communicated effectively. ... Envision an enterprise or company, then imagine all the teams in the organization working in parallel on changes, remember to add in maintenance teams and operations teams doing ‘keep the lights running’ work. ... To effectively manage decisions, the architecture team should put in place a decision management process early in its lifecycle, by making critical investments into how the organization is going to process decision points in the architecture engagement model. During the engagement methodology update and the engagement principles definition, the team will decide what levels of decisions must be exposed in the repository and their limits in duration, quality and effort. These principles will guide the decision methods for the entire team until the next methodology update. There are numerous decision methods and theories in the marketplace for making better decisions. The goal of the architecture decision repository is to ensure that decisions are made clearly, with appropriate tools and with respect for traceability.


What is predictive analytics? Transforming data into future insights

Predictive analytics draws its power from many methods and technologies, including big data, data mining, statistical modeling, ML, and assorted mathematical processes. Organizations use predictive analytics to sift through current and historical data to detect trends, and forecast events and conditions that should occur at a specific time, based on supplied parameters. With predictive analytics, organizations can find and exploit patterns contained within data in order to detect risks and opportunities. Models can be designed, for instance, to discover relationships between various behavior factors. Such models enable the assessment of either the promise or risk presented by a particular set of conditions, guiding informed decision making across various categories of supply chain and procurement events. ... Predictive analytics makes looking into the future more accurate and reliable than previous tools. As such it can help adopters find ways to save and earn money. Retailers often use predictive models to forecast inventory requirements, manage shipping schedules, and configure store layouts to maximize sales. Airlines frequently use predictive analytics to set ticket prices reflecting past travel trends. 
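As a deliberately simple sketch of the core mechanic, fit a model on historical data and extrapolate forward. Real predictive analytics pipelines use far richer features and models than a linear trend, and the monthly demand numbers below are invented for illustration.

```python
import numpy as np

# Invented monthly demand history (units sold) for 12 months.
months = np.arange(1, 13)
demand = np.array([120, 125, 130, 138, 140, 150, 155, 160, 168, 172, 180, 186])

# Fit a simple linear trend, standing in for the richer models used in practice.
slope, intercept = np.polyfit(months, demand, deg=1)

# Forecast the next three months to guide inventory and shipping decisions.
future_months = np.arange(13, 16)
forecast = slope * future_months + intercept
for m, f in zip(future_months, forecast):
    print(f"month {m}: forecast ~{f:.0f} units")
```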


C-Suite Leaders Must Rewire Businesses for True AI Value

AI's true value doesn't come from incremental gains but emerges when workflows are transformed completely. McKinsey found 21% of companies using gen AI have redesigned workflows and seen significant effect on their bottom-line. Morgan Stanley redesigned client interactions by integrating AI-powered assistants. Rather than just automating document retrieval, the company embedded AI into workflows, enabling advisers to generate customized reports and insights in real time. This improved efficiency and enhanced customer experience through more data-driven, personalized interactions. Boston Consulting Group highlighted that companies embedding AI into core business workflows report 40% higher process efficiency and 25% faster output. For CIOs and AI leaders, this highlights a crucial point. Deploying AI without rethinking workflows resembles putting a turbo engine in a low-end car. The real competitive advantage comes from integrating AI into the fabric of business operations and not in standalone tasks. ... AI is becoming a core function that enhances decision-making, automates tasks and drives innovation. McKinsey's report emphasized that AI's biggest value lies in large-scale transformation, not isolated use cases. 

Daily Tech Digest - March 20, 2025


Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford



Agentic AI — What CFOs need to know

Agentic AI takes efficiency to the next level as it builds on existing AI platforms with human-like decision-making, relieving employees of monotonous routine tasks, allowing them to focus on more important work. CFOs will be happy to know that like other forms of AI, agentic is scalable and flexible. For example, organizations can build it into customer-facing applications for a highly customized experience or sophisticated help desk. Or they could embed agentic AI behind the scenes in operations. ... Not surprisingly, like other emerging technologies, agentic AI requires thoughtful and strategic implementation. This means starting with process identification and determining which specific process or functions are suitable for agentic AI. Business leaders also need to determine organizational value and impact and find ways to evaluate and measure to ensure the technology is delivering clear benefits. Companies should also be mindful of team composition, and, if necessary, secure external experts to ensure successful implementation. Beyond the technical feasibility, there are other considerations such as data security. For now, CFOs and other business leaders need to wrap their heads around the concept of “agents” and keep their minds open to how this powerful technology can best serve the needs of their organization. 


5 pitfalls that can delay cyber incident response and recovery

For tabletop exercises to be truly effective, they must have internal ownership and be customized to the organization. CISOs need to ensure that tabletops are tailored to the company’s specific risks, security use cases and compliance requirements. Exercises should be run regularly (quarterly, at a minimum) and evaluated with a critical eye to ensure that outcomes are reflected in the company’s broader incident response plan. ... One of the most common failures in incident response is a lack of timely information sharing. Key stakeholders, including HR, PR, Legal, executives and board members, must be kept informed about the situation in real time. Without proper communication channels and predefined reporting structures, misinformation or delays can lead to confusion, prolonged downtime and even regulatory penalties for failure to report incidents within required timeframes. CISOs are responsible for proactively establishing clear communication protocols and ensuring that all responders and stakeholders understand their role in incident management. ... Out-of-band communication capabilities are critical for safeguarding response efforts and shielding them from an attacker’s view. Organizations should establish secure, independent channels for coordinating incident response that aren’t tied to corporate networks.


Bringing Security to Digital Product Design

We are aware that prioritizing security is a common challenge. Even though it is a critical issue, most leaders behind the development of new products are reluctant to prioritize it; whenever possible, they try to focus the team's efforts on features. For this reason, there is often no room for this type of discussion. So what should we do? Fortunately, there are multiple possible solutions. One way to approach the topic is to take advantage of a collaborative and immersive session such as product discovery. ... Usually, in a product discovery session, there is a proposed activity to map personas. To map hostile behavior, I recommend using the same persona model, going deeper into hostility characteristics in sections such as bio, objectives, interests, and frustrations. After the personas have been described, it is important to deepen the discussion by mapping journeys. The goal here is to identify actions and behaviors that provide ideas on how to correctly deal with threats. Remember that when using an attacker persona, the materials should be written from its perspective. ... Complementing the user journey with likely attacker actions is another technique that helps software development teams map, plan, and address security as early as possible.


From Cloud Native to AI Native: Lessons for the Modern CISO to Win the Cybersecurity Arms Race

Today, CISOs stand at another critical crossroads in security operations: the move from a “Traditional SOC” to an “AI Native SOC.” In this new reality, generative AI, machine learning and large-scale data analytics power the majority of the detection, triage and response tasks once handled by human analysts. Like Cloud Native technology before it, AI Native security methods promise profound efficiency gains but also necessitate a fundamental shift in processes, skillsets and organizational culture. ... For CISOs, transitioning to an AI Native SOC represents a massive opportunity—akin to how CIOs leveraged DevOps and cloud-native to gain a competitive edge. Strategic Perspective: CISOs must look beyond tool selection to organizational and cultural shifts. By championing AI-driven security, they demonstrate a future-ready mindset—one that’s essential for keeping up with advanced adversaries and board-level expectations around cyber resilience. Risk Versus Value Equation: Cloud-native adoption taught CIOs that while there are upfront investments and skill gaps, the long-term benefits—speed, agility, scalability—are transformative. In AI Native security, the same holds true: automation reduces response times, advanced analytics detect sophisticated threats and analysts focus on high-value tasks.
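As one deliberately simplified picture of what ML-assisted triage in an AI Native SOC might look like, the sketch below trains a small text classifier on hypothetical historical alert dispositions and uses it to rank new alerts for analysts; the alerts, labels, and model choice are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch of ML-assisted alert triage: a small text classifier ranks
# incoming alerts so analysts see the likeliest true positives first.
# The training alerts and labels here are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

historical_alerts = [
    "multiple failed logins followed by success from new country",
    "scheduled vulnerability scan completed on internal subnet",
    "powershell spawning encoded command from office document",
    "user locked out after mistyped password",
]
labels = [1, 0, 1, 0]  # 1 = escalated by analysts, 0 = closed as benign

triage_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage_model.fit(historical_alerts, labels)

new_alerts = [
    "office macro launching powershell with base64 payload",
    "routine backup job finished successfully",
]
# Probability of the "escalate" class drives queue ordering for analysts.
scores = triage_model.predict_proba(new_alerts)[:, 1]
for alert, score in sorted(zip(new_alerts, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {alert}")
```

A production SOC would pair this kind of ranking with enrichment, feedback loops from analyst decisions, and generative summaries, but the principle of letting models absorb the repetitive first pass is the same.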


Europe slams the brakes on Apple innovation in the EU

With its latest Digital Markets Act (DMA) action against Apple, the European Commission (EC) proves it is bad for competition, bad for consumers, and bad for business. It also threatens Europeans with a hitherto unseen degree of data insecurity and weaponized exploitation. The information Apple is being forced to make available to competitors with cynical interest in data exfiltration will threaten regional democracy, opening doors to new Cambridge Analytica scandals. This may sound histrionic. And certainly, if you read the EC’s statement detailing its guidance to “facilitate development of innovative products on Apple’s platforms” you’d almost believe it was a positive thing. ... Apple isn’t at all happy. In a statement, it said: “Today’s decisions wrap us in red tape, slowing down Apple’s ability to innovate for users in Europe and forcing us to give away our new features for free to companies who don’t have to play by the same rules. It’s bad for our products and for our European users. We will continue to work with the European Commission to help them understand our concerns on behalf of our users.” There are several other iniquitous measures contained in Europe’s flawed judgement. For example, Apple will be forced to hand over access to innovations to competitors for free from day one, slowing innovation. 


The Impact of Emotional Intelligence on Young Entrepreneurs

The first element of emotional intelligence is self-awareness, which means being able to identify your emotions as they happen and understand how they affect your behavior. During the COVID-19 pandemic, I often felt frustrated when my sales went down during the international book fair. But by practicing self-awareness, I was able to acknowledge the frustration and think about its sources instead of letting it lead to impulsive reactions. Being self-aware helps me stay in control of my actions and make decisions that align with my values. So the solution back then was to keep pushing sales through my online platform instead of showing up in person, as I realized that people were still in lockdown due to the pandemic. Self-regulation is another important aspect of emotional intelligence. While self-awareness is about recognizing emotions, self-regulation focuses on managing how you respond to them. Self-regulation doesn't mean ignoring your emotions but learning to express them in a constructive way. Imagine a situation where you feel angry after receiving negative feedback. Instead of reacting defensively or shouting, self-regulation allows you to take a step back, consider the feedback calmly, and respond appropriately.


Bridging the Gap: Integrating All Enterprise Data for a Smarter Future

To bridge the gap between mainframe and hybrid cloud environments, businesses need a modern, flexible, technology-driven strategy — one that ensures they can access, analyze, and act on their data without disruption. Rather than relying on costly, high-risk "rip-and-replace" modernization efforts, organizations can integrate their core transactional data with modern cloud platforms using automated, secure, and scalable solutions capable of understanding and modernizing mainframe data. One of the most effective methods is real-time data replication and synchronization, which enables mainframe data to be continuously updated in hybrid cloud environments in real time. Low-impact change data capture technology recognizes and replicates only the modified portions of datasets, reducing processing overhead and ensuring real-time consistency across both mainframe and hybrid cloud systems. Another approach is API-based integration, which allows organizations to provide mainframe data as modern, cloud-compatible services. This eliminates the need for batch processing and enables cloud-native applications, AI models, and analytics platforms to access real-time mainframe data on demand. API gateways further enhance security and governance, ensuring only authorized systems can interact with sensitive transactional business data.
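A minimal sketch of the change-data-capture pattern described above is shown below: only rows modified since the last sync are copied from the source system to the target. SQLite stands in for both the mainframe source and the hybrid-cloud target, and the table name, last_modified column, and high-water-mark logic are hypothetical illustrations rather than any particular vendor's implementation.

```python
# Minimal sketch of change-data-capture style replication: only rows modified
# since the last sync are copied from the source system to the target.
# SQLite stands in for the mainframe source and the cloud target here;
# the table layout and last_modified column are hypothetical.
import sqlite3

source = sqlite3.connect(":memory:")   # stand-in for the mainframe datastore
target = sqlite3.connect(":memory:")   # stand-in for the hybrid-cloud copy

source.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount REAL, last_modified INTEGER)")
target.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount REAL, last_modified INTEGER)")
source.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, 100.0, 10), (2, 250.0, 12), (3, 75.5, 15)],
)

def replicate_changes(last_sync: int) -> int:
    """Copy only rows changed after last_sync; return the new high-water mark."""
    changed = source.execute(
        "SELECT id, amount, last_modified FROM transactions WHERE last_modified > ?",
        (last_sync,),
    ).fetchall()
    target.executemany(
        "INSERT OR REPLACE INTO transactions VALUES (?, ?, ?)", changed
    )
    target.commit()
    return max((row[2] for row in changed), default=last_sync)

high_water_mark = replicate_changes(last_sync=11)   # picks up rows 2 and 3 only
print("rows in target:", target.execute("SELECT COUNT(*) FROM transactions").fetchone()[0])
```

The same incremental idea is what keeps processing overhead low on the mainframe side: each cycle moves only the delta, and the high-water mark (or a change log) defines where the next cycle starts.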


How CISOs are approaching staffing diversity with DEI initiatives under pressure

“In the end, a diverse, engaged cybersecurity team isn’t just the right thing to build — it’s critical to staying ahead in a rapidly evolving threat landscape,” he says. “To fellow CISOs, I’d say: Stay the course. The adversary landscape is global, and so our perspective should be as well. A commitment to DEI enhances resilience, fosters innovation, and ultimately strengthens our defenses against threats that know no boundaries.” Nate Lee, founder and CISO at Cloudsec.ai, says that even if DEI isn’t a specific competitive advantage — although he thinks diversity in many shapes is — it’s the right thing to do, and “weaponizing it the way the administration has is shameful.” “People want to work where they’re valued as individuals, not where diversity is reduced to checking boxes, but where leadership genuinely cares about fostering an inclusive environment,” he says. “The current narrative tries to paint efforts to boost people up as misguided and harmful, which to me is a very disingenuous argument.” ... “Diverse workforces make you stronger and you are a fool if you [don’t] establish a diverse workforce in cybersecurity. You are at a distinct disadvantage to your adversaries who do benefit from diverse thinking, creativity, and motivations.”


AI-Powered Cyber Attacks and Data Privacy in The Age of Big Data

Artificial intelligence has significantly increased attackers' ability to conduct cyber-attacks efficiently, and at greater scale and sophistication. Unlike traditional attacks, AI-driven attacks can automatically learn, adapt, and develop strategies with minimal human intervention. They leverage machine learning, natural language processing, and deep learning models to identify and analyze vulnerabilities, evade security and detection systems, and craft believable phishing campaigns. ... AI has also made malware and autonomous hacking systems far more capable. These systems can infiltrate networks, exploit system vulnerabilities, and evade detection. Unlike conventional malware, AI-driven malware can modify its code in real time, which makes detection and eradication far more difficult for security software. Polymorphic malware, for example, changes its appearance based on data collected from every attack attempt.


Platform Engineers Must Have Strong Opinions

Many platform engineering teams build internal developer platforms, which allow development teams to deploy their infrastructure with just a few clicks and reduce the number of issues that slow deployments. Because they are designing the underlying application infrastructure across the organization, the platform engineering team must have a strong understanding of their organization and the application types their developers are creating. This is also an ideal point to inject standards about security, data management, observability and other structures that make it easier to manage and deploy large code bases.  ... To build a successful platform engineering strategy, a platform engineering team must have well-defined opinions about platform deployments. Like pizza chefs building curated pizza lists based on expertise and years of pizza experience, the platform engineering team applies its years of industry experience in deploying software to define software deployments inside the organization. The platform engineering team’s experience and opinions guide and shape the underlying infrastructure of internal platforms. They put guardrails into deployment standards to ensure that the provided development capabilities meet the needs of engineering organizations and fulfill the larger organization’s security, observability and maintainability needs.
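As a small, hypothetical illustration of such guardrails, the sketch below encodes a few opinionated deployment standards (required tags, no public ingress by default, explicit memory limits) as a check that a service manifest must pass before deployment; the manifest fields and rules are invented for the example rather than drawn from any particular platform.

```python
# Minimal sketch of an opinionated deployment guardrail: the platform team's
# standards are encoded as checks every service manifest must pass before
# deployment. The manifest fields and rules here are hypothetical examples.
REQUIRED_TAGS = {"team", "cost-center", "data-classification"}

def check_guardrails(manifest: dict) -> list[str]:
    """Return a list of guardrail violations for a service manifest."""
    violations = []
    missing = REQUIRED_TAGS - set(manifest.get("tags", {}))
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    if manifest.get("ingress", {}).get("public", False):
        violations.append("public ingress is not allowed by default")
    if "memory_limit_mb" not in manifest.get("resources", {}):
        violations.append("memory limit must be set explicitly")
    return violations

manifest = {
    "service": "payments-api",
    "tags": {"team": "payments", "cost-center": "cc-1234"},
    "ingress": {"public": True},
    "resources": {"cpu_millicores": 500},
}

for violation in check_guardrails(manifest):
    print("GUARDRAIL:", violation)
```

In a real internal developer platform these opinions would typically run as policy checks in the deployment pipeline, so developers get fast feedback while the platform team's standards for security, observability and cost are enforced by default.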