Daily Tech Digest - January 12, 2025

Data Architecture Trends in 2025

While unstructured data makes up the lion’s share of data in most companies (typically about 80%), structured data does its part to bulk up businesses’ storage needs. Sixty-four percent of organizations manage at least one petabyte of data, and 41% of organizations have at least 500 petabytes of data, according to the AI & Information Management Report. By 2028, global data creation is projected to grow to more than 394 zettabytes – and clearly enterprises will have more than their fair share of that. Time to open the door to the data lakehouse, which combines the capabilities of data lakes and data warehouses, simplifying data architecture and analytics with unified storage and processing of structured, unstructured, and semi-structured data. “Businesses are increasingly investing in data lakehouses to stay competitive,” according to MarketResearch, which sees the market growing at a 22.9% CAGR to more than $66 billion by 2033. ... “Through 2026, two-thirds of enterprises will invest in initiatives to improve trust in data through automated data observability tools addressing the detection, resolution, and prevention of data reliability issues,” according to Matt Aslett.


How Does a vCISO Leverage AI?

CISOs design and inform policy that shapes security at a company. They inform the priorities of their organizations’ cyberdefense deployment and design, develop, or otherwise acquire the tools needed to achieve the goals they set up. They implement tools and protections, monitor effectiveness, make adjustments, and generally ensure that security functions as desired. However, all that responsibility comes at immense costs, and CISOs are in high demand. It can be challenging to recruit and retain top-level talent for the role, and many smaller or growing organizations—and even some larger older ones—do not employ a traditional, full-time CISO. Instead, they often turn to vCISOs. This is far from a compromise, as vCISOs offer all of the same functionality as their traditional counterparts through an entire team of dedicated service providers rather than a single employee. Since vCISOs are available on a fractional basis, organizations only pay for specific services they need. ... As with all technological breakthroughs, AI is not without its risks and drawbacks. Thankfully, working with a vCISO allows organizations to take advantage of all the benefits of AI while also minimizing its potential downsides. A capable vCISO team doesn’t use AI or any other tool just for the sake of novelty or appearances; their choices are always strategic and risk-informed.


The Transformative Benefits of Enterprise Architecture

Enterprise Architecture review or development is essential for managing complexity, particularly when changes involve multiple systems with intricate interdependencies. ... Enterprise Architecture provides a structured approach to handle these complexities effectively. Often, key stakeholders, such as department heads, project managers, or IT leaders, identify areas of change required to meet new business goals. For example, an IT leader may highlight the need for system upgrades to support a new product launch or a department head might identify process inefficiencies impacting customer satisfaction. These stakeholders are integral to the change process, and the role of the architect is to: Identify and refine the requirements of the stakeholders; Develop architectural views that address concerns and requirements; Highlight trade-offs needed to reconcile conflicting concerns among stakeholders. Without Enterprise Architecture, it is highly unlikely that all stakeholder concerns and requirements will be comprehensively addressed. This can lead to missed opportunities, unanticipated risks, and inefficiencies, such as misaligned systems, redundant processes, or overlooked security vulnerabilities, all of which can undermine business goals and stakeholder trust.


Listen to your technology users — they have led to the most disruptive innovations in history

First, create a culture of open innovation that values insights from outside the organization. While the technical geniuses in your R&D department are experts in how to build something new, they aren’t the only authorities on what it is you should build. Our research suggests that it’s especially important to seek out user-generated disruption at times when customer needs are changing rapidly. Talk to your customers and create channels for dialogue and engagement. Most companies regularly survey users and conduct focus groups. But to identify truly disruptive ideas, you need to go beyond reactions to existing products and plumb unmet needs and pain points. Customer complaints also offer insight into how existing solutions fall short. AI tools make it easier to monitor user communities online and analyze customer feedback, reviews, and complaints. Keep a finger on the pulse of social media and online user communities where people share innovative ways to adapt existing products and wish lists for new functionalities. ... Lastly, explore co-creation initiatives that foster direct collaboration with user innovators. For instance, run a contest where customers submit ideas for new products or features, some of which could turn out to be truly disruptive. Or sponsor hackathons that bring together users with needs and technical experts to design solutions.


Guide to Data Observability

Data observability is critical for modern data operations because it ensures systems are running efficiently by detecting anomalies, finding root causes, and actively addressing data issues before they can impact business outcomes. Unlike traditional monitoring, which focuses only on system health or performance metrics, observability provides insights into why something is wrong and allows teams to understand their systems in a more efficient way. In the digital age, where companies rely heavily on data-driven decisions, data observability isn’t only an operational concern but a critical business function. ... When we talk about data observability, we’re focusing on monitoring the data that flows through systems. This includes ensuring data integrity, reliability, and freshness across the lifecycle of the data. It’s distinct from database observability, which focuses more on the health and performance of the databases themselves. ... On the other hand, database observability is specifically concerned with monitoring the performance, health, and operations of a database system—for example, an SQL or MongoDB server. This includes monitoring query performance, connection pools, memory usage, disk I/O, and other technical aspects, ensuring the database is running optimally and serving requests efficiently.
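
As a concrete illustration of the kind of signals a data observability check tracks, here is a minimal Python sketch of freshness and volume checks; the thresholds and values are invented for the example and are not drawn from the article.

```python
# Minimal data-observability style checks (illustrative only): verify that a
# table was refreshed recently and that today's row count has not collapsed,
# two of the freshness/volume signals described above.
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, max_lag: timedelta) -> bool:
    """True if the dataset was loaded within the allowed lag window."""
    return datetime.now(timezone.utc) - last_loaded_at <= max_lag

def check_volume(today_rows: int, trailing_avg_rows: float, tolerance: float = 0.5) -> bool:
    """True if today's volume is within tolerance of the recent average."""
    return today_rows >= trailing_avg_rows * tolerance

# Example values (hypothetical): a load three hours ago against a two-hour SLA.
last_load = datetime.now(timezone.utc) - timedelta(hours=3)
print("fresh:", check_freshness(last_load, max_lag=timedelta(hours=2)))
print("volume ok:", check_volume(today_rows=40_000, trailing_avg_rows=100_000))
```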


Data maturity and the squeezed middle – the challenge of going from good to great

Breaking through this stagnation does not require a complete overhaul. Instead, businesses can take small but decisive steps. First, they must shift their mindset from seeing data collection as an end in itself, to viewing it as a tool for creating meaningful customer interactions. This means moving beyond static metrics and broad segmentations to dynamic, real-time personalisation. The use of artificial intelligence (AI) can be transformative in this regard. Modern AI tools can analyse customer behaviour in real time, enabling businesses to respond with tailored content, promotions, and experiences. For instance, rather than relying on broad-brush email campaigns, companies can use AI-driven insights to craft (truly) hyper-personalised messages based on individual customer journeys. Such efforts not only improve conversion rates, but also build deeper customer loyalty. ... It’s important to never lose sight of the fact that data maturity is about people and culture as much as tech. Organisations need to foster a culture that values experimentation, learning, and continuous improvement. Behaviourally, this can be uncomfortable for slow-moving or cautious businesses and requires breaking down silos and encouraging cross-functional collaboration. 


Finding a Delicate Balance with AI Regulation and Innovation

The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the number of mistakes and biased outcomes, and when errors are still made, transparency will help rectify the situation. It is also essential that regulation tries to prevent AI from being used for illegal activity, including fraud, discrimination, forging documents, and creating deepfake images and videos. It should be a requirement for companies of a certain size to have an AI policy in place that is publicly available for anyone to consult. The second focus should be protecting the environment. Due to the amount of energy needed to train AI, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost to the environment. It shouldn’t be a zero-sum game, and legislation should nudge companies to create AI that is respectful of our planet. The third and final key focus is data protection. Thankfully there is strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside existing data regulation and protect the huge steps that have already been taken.


Quantum Machine Learning for Large-Scale Data-Intensive Applications

Quantum machine learning (QML) represents a novel interdisciplinary field that merges principles of quantum computing with machine learning techniques. The foundation of quantum computing lies in the principles of quantum mechanics, which govern the behavior of subatomic particles and introduce phenomena such as superposition and entanglement. These quantum properties enable quantum computers to perform computations probabilistically, offering potential advantages over classical systems in specific computational tasks ... Integrating quantum machine learning (QML) with traditional machine learning (ML) models is an area of active research, aiming to leverage the advantages of both quantum and classical systems. One of the primary challenges in this integration is the necessity for seamless interaction between quantum algorithms and existing classical infrastructure, which currently dominates the ML landscape. Despite the resource-intensive nature of classical machine learning, which necessitates high-speed computer hardware to train state-of-the-art models, researchers are increasingly exploring the potential benefits of quantum computing to optimize and expedite these processes.
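
As a rough illustration of the hybrid quantum-classical pattern described above, the sketch below defines a tiny variational circuit, assuming the open-source PennyLane library and its built-in simulator; the encoding, gates, and parameter values are illustrative choices, not taken from the article.

```python
# Minimal hybrid QML sketch: a classical feature is angle-encoded into a
# qubit, trainable rotations and an entangling gate follow, and the Pauli-Z
# expectation value serves as the model output a classical optimizer could train.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)  # local statevector simulator

@qml.qnode(dev)
def circuit(weights, x):
    qml.RY(x, wires=0)            # encode the classical feature x
    qml.RY(weights[0], wires=0)   # trainable rotation
    qml.RY(weights[1], wires=1)   # trainable rotation
    qml.CNOT(wires=[0, 1])        # entangle the two qubits
    return qml.expval(qml.PauliZ(1))  # value in [-1, 1] used as the prediction

weights = np.array([0.1, 0.2], requires_grad=True)
print(circuit(weights, 0.5))
```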


Generative Architecture Twins (GAT): The Next Frontier of LLM-Driven Enterprise Architecture

A Generative Architecture Twin (GAT) is a virtual, LLM-coordinated environment that mirrors — and continuously evolves with — your actual production architecture. ... Despite the challenges, Generative Architecture Twins represent an ambitious leap forward. They propose a world where: architectural decisions are no longer static but evolve with real-time feedback loops; compliance, security, and performance are integrated from day one rather than tacked on later; EA documentation isn’t a dusty PDF but a living blueprint that changes as the system scales; and enterprises can experiment with high-risk changes in a safe, cost-controlled manner, guided by autonomous AI that learns from every iteration. As we refine these concepts, expect to see the first prototypes of GAT in innovative startups or advanced R&D divisions of large tech enterprises. A decade from now, GAT may well be as ubiquitous as DevOps pipelines are today. Generative Architecture Twins (GAT) go beyond today’s piecemeal LLM usage and envision a closed-loop, AI-driven approach to continuous architectural design and validation. By combining digital twins, neuro-symbolic reasoning, and ephemeral simulation environments, GAT addresses long-standing EA challenges like stale documentation, repetitive compliance overhead, and costly rework.


Is 2025 the year of (less cloud) on-premises IT?

For an external view here outside of OWC, Vadim Tkachenko, technology fellow and co-founder at Percona thinks that whether or not we’ll see a massive wave of data repatriation take place in 2025 is still hard to say. “However, I am confident that it will almost certainly mark a turning point for the trend. Yes, people have been talking about repatriation off and on and in various contexts for quite some time. I firmly believe that we are facing a real inflection point for repatriation where the right combination of factors will come together to nudge organisations towards bringing their data back in-house to either on-premises or private cloud environments which they control, rather than public cloud or as-a-Service options,” he said. Tkachenko further states that companies across the private sector (and tech in particular) are tightening their purse strings considerably. “We’re also seeing more work on enhanced usability, ease of deployment, and of course, automation. The easier it becomes to deploy and manage databases on your own, the more organizations will have the confidence and capabilities needed to reclaim their data and a sizeable chunk of their budgets,” said the Percona man. It turns out then, cloud is still here and on-premises is still here and… actually, a hybrid world is typically the most prudent route to go down.



Quote for the day:

"The greatest leaders mobilize others by coalescing people around a shared vision." -- Ken Blanchard

Daily Tech Digest - January 11, 2025

Managing Third-Party Risks in the Software Supply Chain

The myriad of third-party risks, such as compromised or faulty software updates, insecure hardware or software components and insufficient security practices, expands the attack surface of the organization. A security breach in one such third-party entity can ripple through and potentially lead to significant operational disruptions, financial losses and reputational damage to the organization. In view of this, securing not just their own organizations, but also the intricate web of suppliers, vendors and partners that make up their cyber supply chain is not just an option, but a necessity. Needless to say, managing third-party risks is becoming a big challenge for Chief Information Security Officers. What is more, it may not be enough to manage third-party risks alone; fourth-party risks must be addressed as well. ... Mapping your most critical third-party relationships can identify weak links across your extended enterprise. But to be effective, it needs to go beyond third parties. In many cases, risks are often buried within complex subcontracting arrangements and other relationships, within both your supply chain and vendor partnerships. Illuminating your extended network to see beyond third parties is critical to assessing, mitigating and monitoring the risks posed by sub-tier suppliers.


6G, AI and Quantum: Shaping the Future of Connectivity, Computing and Security

Beyond 6G, another transformative technology that will reshape industries in 2025 is quantum computing. This isn’t just about faster processing; it’s about tackling problems that are currently intractable for even the most powerful conventional systems. Think of the implications for AI training itself – imagine feeding massive, complex datasets into quantum-powered algorithms. The potential for breakthroughs in AI research and development is immense. This next-gen computational power is expected to solve complex problems that were previously deemed unsolvable, ushering in a new era of innovation and efficiency. The impact of these developments will be felt in a range of industries such as pharmaceuticals, cryptography and supply chains. For instance, in the pharmaceutical sector, quantum computing is set to speed up drug discovery. ... The rise of distributed cloud models and edge computing will also speed up services and provide value and innovation – placing cloud technology at the centre of every organisation’s strategic roadmap. Leveraging cloud infrastructure allows businesses to rapidly scale AI models, process enormous volumes of data in real-time, and generate actionable insights that facilitate intelligent decision-making. 


Advancing Platform Accountability: The Promise and Perils of DSA Risk Assessments

Multiple risk assessments fail to meaningfully consider risks related to problematic and harmful use and the design or functioning of their service and systems. Facebook’s 2024 risk assessment addresses physical and mental well-being in a crosscutting way but does not meaningfully consider risks related to excessive use or addiction. Other assessments more centrally consider physical and mental well-being risks. ... Snap’s risk assessment devotes seven pages to physical and mental well-being risks, but the assessment fails to consider how platform design could contribute to physical and mental well-being risks by incentivizing problematic or harmful use. Snap’s assessment is broadly focused on risks related to harmful content. The assessment describes mitigations to reduce the prevalence of such content that could impact physical and mental well-being – including auto-moderating for abusive content or ensuring recommender systems do not recommend violative content. This, of course, is important. However, the risk assessment and review of mitigations place almost no emphasis on risks of excessive use actually driven by Snap’s design. Snap’s focus on ephemeral content is presented as only a benefit – “conversations on Snapchat delete by default to reflect real-life conversations.”


Hard and Soft Skills Go Hand-in-Hand — These Are the Ones You Need to Sharpen This Year

To most effectively harness the power of AI in 2025, leaders need to understand it. DataCamp's Matt Crabtree describes AI literacy, at its most basic, as having the skills and competencies required to use AI technologies and applications effectively. But it's much more than that: Crabtree points out that AI literacy is also about enabling people to make informed decisions about how they're using AI, understand the implications of those uses and navigate the ethical considerations they present. For leaders, that means understanding biases that remain embedded in AI systems, privacy concerns, and the need for transparency and accountability. Say you're looking to integrate AI into your hiring process, as we have at my company, Jotform. It's important to understand that while it can be used for tasks like scheduling interviews, screening resumes for objective criteria or helping to organize candidate information, it should not be making hiring decisions for you. AI still has a significant bias problem, in addition to the many other ways in which it lacks the soft skills required for certain, human-only tasks. AI literacy is about understanding its shortcomings and navigating them in a way that is fair and equitable.


The Tech Blanket: Building a Seamless Tech Ecosystem

The days of disconnected platforms are over. In 2025, businesses will embrace platform interoperability to ensure that knowledge and data flow seamlessly across departments. Think of your organization’s technology as a woven blanket—each tool and system represents a thread that, when tightly interwoven, creates a strong, cohesive layer of support that covers your entire company. ... Building a seamless ecosystem begins with establishing a framework for managing distributed information. By creating a Knowledge Asset Center of Excellence, organizations can define norms for how data and knowledge are shared and governed. This approach fosters collaboration while allowing teams the flexibility to work in ways that suit their unique needs. ... As platforms become more interconnected, ensuring robust security becomes critical. Data breaches or inaccuracies in one tool can ripple across the ecosystem, creating significant risks. Leaders must prioritize tools with advanced security features, such as encryption and role-based access controls, to protect sensitive information while maintaining seamless interoperability. Strong data governance policies are also essential. By continuously monitoring data flow and usage, organizations can safeguard the integrity of their knowledge assets while promoting responsible collaboration.


WebAssembly and Containers’ Love Affair on Kubernetes

WebAssembly is showing promise on Kubernetes thanks to the fact that WebAssembly now meets the OCI registry standard as OCI artifacts. This enables Wasm to meet the Kubernetes standard and the OCI standard for containerization, specifically the OCI artifact format. It also involves compatibility with Kubernetes pods, storage interfaces and more. In that respect, it’s one step toward using Wasm as an alternative to containers. Additionally, through containerd, WebAssembly components can be distributed side by side with containers in Kubernetes environments. Zhou likened this to a drop-in replacement for the unit’s containers, integrating with tools such as Istio, Dapr and OpenTelemetry Collector. ... When running applications through WebAssembly as sidecars in a cluster, the two main challenges involve distribution and deployment, as Zhou outlined. A naive approach bundles the Wasm runtime into a container, but a better method offloads the Wasm runtime into the shim process in containerd. This approach allows Kubernetes orchestration of Wasm workloads. The OCI artifact format for WebAssembly, enabling Wasm components to use the same distribution mechanisms as containers, is responsible for the distribution part, Zhou said.
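
As a rough illustration of the orchestration step, the sketch below uses the Kubernetes Python client to request a Pod whose runtime class points at a containerd Wasm shim; the runtime class name, image reference, and namespace are placeholders invented for the example, not names taken from the article.

```python
# Hypothetical sketch: scheduling a Wasm workload onto Kubernetes through a
# containerd shim exposed as a RuntimeClass. Assumes a kubeconfig is present
# and that a matching RuntimeClass has already been installed on the cluster.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="wasm-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="wasmtime-spin",  # placeholder name for the Wasm shim
        containers=[
            client.V1Container(
                name="wasm-app",
                # Placeholder OCI artifact reference for a packaged Wasm component.
                image="registry.example.com/demo/wasm-app:latest",
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```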


Training Employees for the Future with Digital Humans

Digital humans leverage a host of advanced technologies: large language models, retrieval-augmented generation, and intelligent AI orchestrators, among them. They also use unique techniques like kinesthetic learning, or “learning by doing,” alongside on-screen visuals to better illustrate more complicated topics. Note that digital humans are not like traditional chatbots that follow structured dialog trees. Instead, they can respond dynamically to the employee's inputs to ensure interactions are as lifelike as possible. ... By allowing employees to apply their training in real-world scenarios, digital humans help them retain more information in less time, reducing traditional training timelines significantly. As a result, businesses will spend less money and time reskilling personnel. The training possibilities with digital humans are vast, helping employees learn to use new technologies and systems. In a sales setting, personnel can practice using new generative AI-powered customer service tools while a digital human pretends to be a customer. Digital humans could also help engineers in the automotive space learn how to use machine-learning solutions or operate 3D printing machines.


From Silos to Synergy: Transforming Threat Intelligence Sharing in 2025

Put simply, organizations must break down the silos between ALL teams involved in security. This is not just about understanding the organization’s cyber hygiene, but it is also about understanding the layers that an attacker would have to get through to exploit and conduct potentially nefarious activities within the business. Once this insight is gained, teams can work through requirements and align the CTI program for specific stakeholders. This means that both offense and defense teams are working together, mapping out the attack path and gaining a better understanding of defense. Doing this will provide a better understanding of offense as teams scout to look at what could be effective, going to the next layer to consider what might be vulnerable and whether they have mitigating controls in place to provide any additional prevention. ... In the past, teams working on-site together would document their work on a whiteboard. Now, with the advent of remote working, there are fewer opportunities to share in person, and a plethora of communication channels that lead to knowledge fragmentation as different people use different tools such as Slack or other messaging platforms, or would just share intelligence one-on-one.


Explained: The Multifaceted Nature of Digital Twins

Beyond operational improvements, digital twins also drive innovation at scale. Large enterprises with multiple R&D hubs can test new designs or processes in a virtual environment before deploying them globally. For example, an automotive company developing an electric vehicle can simulate how it will perform under different driving conditions, regulatory frameworks and consumer preferences in diverse markets - all within a digital twin. ... Building and maintaining a digital twin requires significant investment in IoT infrastructure, cloud computing, AI and skilled personnel. For many companies, particularly small and medium-sized enterprises, these costs can be prohibitive. A McKinsey study highlights that digital maturity - the ability to effectively integrate and utilize advanced technologies - is often a key barrier. Seventy-five percent of companies that have adopted digital-twin technologies are those that have achieved at least medium levels of complexity. Large enterprises can justify the cost of digital twins by applying them across multiple facilities or product lines, but for smaller companies, the benefits may not scale as effectively, making it harder to achieve a return on investment.


Design Patterns for Building Resilient Systems

You may have some parts of your system that are degrading in performance and causing cascading failures everywhere. So that means that when your client requests a specific part that’s working fine, it’s great, but you want to stop immediately what’s causing the fire. That way, you have different load balancing rules that I’ve defined here to say, okay, this part of our system is degrading performance; it’s starting to affect everything else, and it’s cascading failures. We’re just going to stop it so you can’t even make a request to this route because it’s the one causing all the issues. Having your clients handle that failure to that request gracefully can be incredibly important because then the rest of your system can still work. Maybe some particular routes you’re defining aren’t going to work; some parts of your system will just be unavailable, but it’s not taking down the entire thing. Ultimately, what I’m talking about there is bulkheads. ... Now, while the CrowdStrike incident didn’t directly affect me, it sure did indirectly because I knew about it right away from the alarms based on metrics. When used correctly within context, design patterns allow you to build a resilient system. Now, everything we had in place for resilience helped; they worked. But as always, when something like this happens, it makes you re-evaluate specific individual contexts.
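
A minimal Python sketch of the fail-fast idea behind those load-balancing rules, written as a circuit-breaker-style guard; the thresholds and timings are arbitrary illustrations rather than values from the talk.

```python
# After repeated failures, the route is "opened" and callers fail fast instead
# of piling onto a degrading dependency, containing the blast radius in the
# spirit of the bulkhead pattern described above.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial request through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```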



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - January 10, 2025

Meta puts the ‘Dead Internet Theory’ into practice

In the old days, when Meta was called Facebook, the company wrapped every new initiative in the warm metaphorical blanket of “human connection”—connecting people to each other. Now, it appears Meta wants users to engage with anyone or anything—real or fake doesn’t matter, as long as they’re “engaging,” which is to say spending time on the platforms and money on the advertised products and services. In other words, Meta has so many users that the only way to continue its previous rapid growth is to build users out of AI. The good news is that Meta’s “Dead Internet” projects are not going well. ... Meta is testing a program called “Creator AI,” which enables influencers to create AI-generated bot versions of themselves. These bots would be designed to look, act, sound, and write like the influencers who made them, and would be trained on the wording of their posts. The influencer bots would engage in interactive direct messages and respond to comments on posts, fueling the unhealthy parasocial relationships millions already have with celebrities and influencers on Meta platforms. The other “benefit” is that the influencers could “outsource” fan engagement to a bot. ... “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, said


Experts Highlight Flaws within Government’s Data Request Mandate Under DPDP Rules 2025

Tech Lawyer Varun Sen Bahl also points out the absence of an appellate mechanism for such ‘calls for information’ by the Central government, explaining that such an appeal process only extends against orders of the Data Protection Board. He explains, “This is problematic because it leaves Data Fiduciaries and Data Principals with no clear recourse against excessive data collection requests made under Section 36 read with Rule 22“. Bahl also notes that the provision lacks specific mention of guardrails like the European Union’s data minimisation principle under the General Data Protection Regulation (GDPR) while furnishing such information requests. ... Roy argues that the compliance burdens on Data Fiduciaries will increase and aggravate through sweeping requests and by invoking the non-disclosure clause. To explain, he cites the case of the Razorpay-AltNews situation in 2022, when the Government accessed the names and transaction details of the news platform’s donors via Razorpay ... To ensure that government officers and agencies don’t abuse this provision, Roy explains that “Fiduciaries must [as part of corporate governance] give periodic reports of the number of such demands”. Similarly, law enforcement and other agencies should also submit periodic reports of such requests to the Data Protection Board comprising details of cases where the non-disclosure clause is invoked.


How Edge Computing can Give OEMs a Competitive Advantage

Latency matters in warehouse automation too. Performing predictive maintenance on a shoe sorter, for example, could require real-time monitoring of actuators that do diversions every 40 milliseconds. Component-level computing power allows the system to respond to changing conditions with speed and efficiency levels that simply wouldn’t be possible with a cloud-based system. ... Edge components can also communicate with a system’s programmable logic controllers (PLCs), making their data immediately available to end users. Supporting software on the customer’s local network interprets this information, enabling predictive maintenance and other real-time insights while tracking historical trends over time. ... Edge technology enables you to build assets that deliver higher utilization to your customers. Much of this benefit comes from the greater efficiencies of predictive maintenance. Users have less downtime because unnecessary service is reduced or eliminated, and many problems can be resolved before they cause unplanned shutdowns. Smart components can also deliver more process consistency. Ordinarily, parts degrade over time, gradually losing speed and/or power. With edge capabilities, they can continuously adapt to changing conditions, including varying parcel weights and normal wear.
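
To make the latency argument concrete, here is a small Python sketch of an edge-side check that samples an actuator's cycle time locally instead of round-tripping to the cloud; the baseline, tolerance, and read_cycle_time_ms() function are hypothetical stand-ins, not details from the article.

```python
# Illustrative edge-side monitoring loop: flag drift in a diverter's cycle time
# at the component, so a maintenance signal can be raised within milliseconds.
import random
import time

BASELINE_MS = 40.0        # assumed expected cycle time for the diverter
DRIFT_THRESHOLD_MS = 5.0  # assumed tolerance before maintenance is flagged

def read_cycle_time_ms() -> float:
    # Placeholder for a real PLC or sensor read.
    return BASELINE_MS + random.gauss(0, 2)

def monitor(samples: int = 5) -> None:
    for _ in range(samples):
        cycle = read_cycle_time_ms()
        if abs(cycle - BASELINE_MS) > DRIFT_THRESHOLD_MS:
            print(f"maintenance flag: cycle time {cycle:.1f} ms drifted from baseline")
        time.sleep(0.04)  # sample at roughly the 40 ms cycle discussed above

monitor()
```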


Have we reached the end of ‘too expensive’ for enterprise software?

LLMs are now changing the way companies approach problems that are difficult or impossible to solve algorithmically, although the term “language” in Large Language Models is misleading. ... GenAI enables a variety of features that were previously too complex, too expensive, or completely out of reach for most organizations because they required investments in customized ML solutions or complex algorithms. ... Companies need to recognize generative AI for what it is: a general-purpose technology that touches everything. It will become part of the standard software development stack, as well as an integral enabler of new or existing features. Ensuring the future viability of your software development requires not only acquiring AI tools for software development but also preparing infrastructure, design patterns and operations for the growing influence of AI. As this happens, the role of software architects, developers, and product designers will also evolve. They will need to develop new skills and strategies for designing AI features, handling non-deterministic outputs, and integrating seamlessly with various enterprise systems. Soft skills and collaboration between technical and non-technical roles will become more important than ever, as pure hard skills become cheaper and more automatable.


Is prompt engineering a 'fad' hindering AI progress?

Motivated by the belief that "a well-crafted prompt is essential for obtaining accurate and relevant outputs from LLMs," aggressive AI users -- such as ride-sharing service Uber -- have created whole disciplines around the topic. And yet, there is a reasoned argument to be made that prompts are the wrong interface for most users of gen AI, including experts. "It is my professional opinion that prompting is a poor user interface for generative AI systems, which should be phased out as quickly as possible," writes Meredith Ringel Morris, principal scientist for Human-AI Interaction for Google's DeepMind research unit, in the December issue of computer science journal Communications of the ACM. Prompts are not really "natural language interfaces," Morris points out. They are "pseudo" natural language, in that much of what makes them work is unnatural. ... In place of prompting, Morris suggests a variety of approaches. These include more constrained user interfaces with familiar buttons to give average users predictable results; "true" natural language interfaces; or a variety of other "high-bandwidth" approaches such as "gesture interfaces, affective interfaces (that is, mediated by emotional states), direct-manipulation interfaces


Building Resilience Into Cyber-Physical Systems Has Never Been This Mission-Critical

In our quest for cyber resilience, we sometimes—mistakenly—fixate on hypothetical doomsday scenarios. While this apocalyptic and fear-based thinking can be an instinctual response to the threats we face, it is not realistic or helpful. Instead, we must champion the progress, even incremental, that is achievable through focused, pragmatic measures—like cyber insurance. By reframing discussions around tangible outcomes such as financial stability and public safety, we can cultivate a clearer sense of priorities. Regulatory frameworks may eventually align incentives towards better cybersecurity practices, but in the interim, transferring risk via a measure like cyber insurance offers a potent mechanism to enhance visibility into risk mitigation strategies and implement better cyber hygiene accordingly. By quantifying potential losses and incentivizing proactive security measures, cyber insurance can catalyze a necessary and overdue cultural shift towards resilience-oriented practices—and a safer world. We stand at a pivotal moment in American critical infrastructure cybersecurity. As hackers threaten to sabotage our vital systems for ransom, the financial damage ensuing from incidents like the one at Halliburton obliges us to stay alert and act proactively.


Don't Fall Into the 'Microservices Are Cool' Trap and Know When to Stick to Monolith Instead

Over time, as monolith applications become less and less maintainable, some teams decide that the only way to solve the problem is to start refactoring by breaking their application into microservices. Other teams make this decision just because "microservices are cool." This process takes a lot of time and sometimes brings even more maintenance overhead. Before going into this, it's crucial to carefully consider all the pros and cons and ensure you've reached the limits of your current monolith architecture. And remember, it is easier to break than to build. ... As you see, the modular monolith is the way to get the best from both worlds. It is like running independent microservices inside a single monolith but avoiding collateral microservices overhead. One limitation you may face is not being able to scale different modules independently. You will have as many monolith instances as required by the most loaded module, which may lead to excessive resource consumption. The other drawback is the limitations of using different technologies. ... When running a monolith application, you can usually maintain a simpler infrastructure. Options like virtual machines or PaaS solutions (such as AWS EC2) will suffice. Also, you can handle much of the scaling, configuration, upgrades, and monitoring manually or with simple tools.
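
As a rough sketch of the modular-monolith idea, the Python below keeps two modules behind small interfaces and wires them together with in-process calls; the module and method names are invented for illustration, not taken from the article.

```python
# Each module exposes a narrow interface and hides its internals, so a module
# could later be extracted into a service without rewriting its callers.
from dataclasses import dataclass

# --- billing module (would live in its own package) ---
@dataclass
class Invoice:
    order_id: str
    amount: float

class BillingModule:
    def create_invoice(self, order_id: str, amount: float) -> Invoice:
        return Invoice(order_id=order_id, amount=amount)

# --- orders module depends only on billing's public interface ---
class OrdersModule:
    def __init__(self, billing: BillingModule):
        self._billing = billing  # in-process call, no network or serialization overhead

    def place_order(self, order_id: str, amount: float) -> Invoice:
        # ... persist the order, then hand off to billing ...
        return self._billing.create_invoice(order_id, amount)

invoice = OrdersModule(BillingModule()).place_order("o-42", 99.0)
print(invoice)
```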


SEC rule confusion continues to put CISOs in a bind a year after a major revision

“There is so much fear out there right now because there is a lack of clarity,” Sullivan told CSO. “The government is regulating through enforcement actions, and we get incomplete information about each case, which leads to rampant speculation.” As things stand, CISOs and their colleagues must chart a tricky course in meeting reporting requirements in the event of a cyber security incident or breach, Shusko says. That means anticipating the need to deal with reporting requirements by making compliance preparation part of any incident response plan, Shusko says. If they must make a cyber incident disclosure, companies should attempt to be compliant and forthcoming while seeking to avoid releasing information that could inadvertently point towards unresolved security shortcomings that future attackers might be able to exploit. ... Given that clarity around disclosure isn’t always straightforward, there is no real substitute for preparedness, and that makes it essential to practise situations that would require disclosure through tabletops and other exercises, according to Simon Edwards, chief exec of security testing firm SE Labs. “Speaking as someone who is invested heavily in the security of my company, I’d say that the most obvious and valuable thing a CISO can do is roleplay through an incident.”


How adding capacity to a network could reduce IT costs

Have you heard the phrase “bandwidth economy of scale?” It’s a sophisticated way of saying that the cost per bit to move a lot of bits is less than it is to move a few. In the decades that information technology evolved from punched cards to PCs and mobile devices, we’ve taken advantage of this principle by concentrating traffic from the access edge inward to fast trunks. ... Higher capacity throughout the network means less congestion. It’s old-think, they say, to assume that if you have faster LAN connections to users and servers, you’ll admit more traffic and congest trunks. “Applications determine traffic,” one CIO pointed out. “The network doesn’t suck data into it at the interface. Applications push it.” Faster connections mean less congestion, which means fewer complaints, and more alternate paths to take without traffic delay and loss, which also reduces complaints. In fact, anything that creates packet loss, outages, even latency, creates complaints, and addressing complaints is a big source of opex. The complexity comes in because network speed impacts user/application quality of experience in multiple ways, ways beyond the obvious congestion impacts. When a data packet passes through a switch or router, it’s exposed to two things that can delay it.
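
One of those per-hop delays, serialization delay, is simply packet size divided by link rate, so it shrinks directly as capacity grows. The short Python calculation below illustrates this with a full-size 1,500-byte Ethernet frame as an assumed example.

```python
# Back-of-the-envelope serialization delay per hop: packet size / link rate.
PACKET_BITS = 1500 * 8  # a full-size Ethernet frame payload, in bits

for gbps in (1, 10, 100):
    delay_us = PACKET_BITS / (gbps * 1e9) * 1e6
    print(f"{gbps:>3} Gbps link: {delay_us:.2f} microseconds per packet per hop")
```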


Ephemeral environments in cloud-native development

An emerging trend in cloud computing is using ephemeral environments for development and testing. Ephemeral environments are temporary, isolated spaces created for specific projects. They allow developers to swiftly spin up an environment, conduct testing, and then dismantle it once the task is complete. ... At first, ephemeral environments sound ideal. The capacity for rapid provisioning aligns seamlessly with modern agile development philosophies. However, deploying these spaces is fraught with complexities that require thorough consideration before wholeheartedly embracing them. ... The initial setup and ongoing management of ephemeral environments can still incur considerable costs, especially in organizations that lack effective automation practices. If one must spend significant time and resources establishing these environments and maintaining their life cycle, the expected savings can quickly diminish. Automation isn’t merely a buzzword; it requires investment in tools, training, and sometimes a cultural shift within the organization. Many enterprises may still be tethered to operational costs that can potentially undermine the presumed benefits. This seems to be a systemic issue with cloud-native anything.
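
A sketch of the create-use-destroy lifecycle as a Python context manager, assuming Docker Compose as the provisioning tool and a hypothetical compose file name; real ephemeral environments are usually driven by CI pipelines or infrastructure-as-code rather than a local script, so treat this as illustrative only.

```python
# Spin up an isolated environment, run work inside it, and tear it down even
# if the work fails -- the lifecycle the article describes for ephemeral spaces.
import subprocess
from contextlib import contextmanager

@contextmanager
def ephemeral_environment(compose_file: str = "docker-compose.test.yml"):
    subprocess.run(["docker", "compose", "-f", compose_file, "up", "-d"], check=True)
    try:
        yield
    finally:
        # Tear down and remove volumes so nothing lingers after the run.
        subprocess.run(["docker", "compose", "-f", compose_file, "down", "-v"], check=True)

# Usage (placeholder test hook):
# with ephemeral_environment():
#     run_integration_tests()
```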



Quote for the day:

"The best leader brings out the best in those he has stewardship over." -- J. Richard Clarke

Daily Tech Digest - January 09, 2025

It’s remarkably easy to inject new medical misinformation into LLMs

By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned. This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web." ... rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term "incidental" data poisoning due to "existing widespread online misinformation." But a lot of that "incidental" information was generally produced intentionally, as part of a medical scam or to further a political agenda. ... Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence.


CIOs are rethinking how they use public cloud services. Here’s why.

Where are those workloads going? “There’s a renewed focus on on-premises, on-premises private cloud, or hosted private cloud versus public cloud, especially as data-heavy workloads such as generative AI have started to push cloud spend up astronomically,” adds Woo. “By moving applications back on premises, or using on-premises or hosted private cloud services, CIOs can avoid multi-tenancy while ensuring data privacy.” That’s one reason why Forrester predicts four out of five so called cloud leaders will increase their investments in private cloud by 20% this year. That said, 2025 is not just about repatriation. “Private cloud investment is increasing due to gen AI, costs, sovereignty issues, and performance requirements, but public cloud investment is also increasing because of more adoption, generative AI services, lower infrastructure footprint, access to new infrastructure, and so on,” Woo says. ... Woo adds that public cloud is costly for workloads that are data-heavy because organizations are charged both for data stored and data transferred between availability zones (AZ), regions, and clouds. Vendors also charge egress fees for data leaving as well as data entering a given AZ. “So for transfers between AZs, you essentially get charged twice, and those hidden transfer fees can really rack up,” she says. 


What CISOs Think About GenAI

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.” However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents like they did when first adopting cloud. Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year. “The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”


Scaling RAG with RAGOps and agents

To maximize their effectiveness, LLMs that use RAG also need to be connected to sources from which departments wish to pull data – think customer service platforms, content management systems and HR systems, etc. Such integrations require significant technical expertise, including experience with mapping data and managing APIs. Also, as RAG models are deployed at scale they can consume significant computational resources and generate large amounts of data. This requires the right infrastructure as well as the experience to deploy it, as well as the ability to manage data it supports across large organizations. One approach to mainstreaming RAG that has AI experts buzzing is RAGOps, a methodology that helps automate RAG workflows, models and interfaces in a way that ensures consistency while reducing complexity. RAGOps enables data scientists and engineers to automate data ingestion and model training, as well as inferencing. It also addresses the scalability stumbling block by providing mechanisms for load balancing and distributed computing across the infrastructure stack. Monitoring and analytics are executed throughout every stage of RAG pipelines to help continuously refine and improve models and operations.
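
For readers unfamiliar with the pipeline RAGOps automates, here is a minimal retrieval-augmented generation sketch in Python: embed documents, retrieve the closest match for a query, and assemble the augmented prompt. The embed() function is a random-vector placeholder for a real embedding model, and the final prompt would normally be sent to an LLM; none of the names come from the article.

```python
import numpy as np

documents = [
    "Refund requests are handled within 14 days.",
    "Support is available weekdays 9am-5pm.",
]

def embed(text: str) -> np.ndarray:
    # Placeholder: a real pipeline would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return prompt  # a real system would send this prompt to an LLM

print(answer("How long do refunds take?"))
```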


Navigating Third-Party Risk in Procurement Outsourcing

Shockingly, only 57% of organisations have enterprise-wide agreements that clearly define which services can or cannot be outsourced. This glaring gap highlights the urgent need to create strong frameworks – not just for external agreements, but also for intragroup arrangements. Internal agreements, though frequently overlooked, demand the same level of attention when it comes to governance and control. Without these solid frameworks, companies are leaving themselves exposed to risks that could have been mitigated with just a little more attention to detail. Ongoing monitoring is also crucial to TPRM; organisations must actively leverage audit rights, access provisions and outcome-focused evaluations. This means assessing operational and concentration risks through severe yet plausible scenarios, ensuring they’re prepared for the worst-case while staying vigilant in everyday operations. ... As the complexity of third-party risk grows, so too does the role of AI and automation. The days of relying on spreadsheets and homegrown databases are long gone. Ed’s thoughts on this topic are unequivocal: “AI and automation are critical as third-party risk becomes increasingly complex. Significant work is required for initial risk assessments, pre-contract due diligence, post-contract monitoring, SLA reviews and offboarding.”


Five Ways Your Platform Engineering Journey Can Derail

Chernev’s first pitfall is when a company tries to start platform engineering by only changing the name of its current development practices, without doing the real work. “Simply rebranding an existing infrastructure or DevOps or SRE practice over to platform engineering without really accounting for evolving the culture within and outside the team to be product-oriented or focused” is a huge mistake ... Another major pitfall, he said, is not having and maintaining product backlogs — prioritized lists of work for the development team — that are directly targeting your developers. “For the groups who have backlogs, they are usually technology-oriented,” he said. “That misalignment in thinking across planning and missing feedback loops is unlikely to move progress forward within the organization. That ultimately leads the initiative to fail to deliver business value. Instead, they should be developer-centric,” said Chernev. ... This is another important point, said Chernev — companies that do not clearly articulate the value-add of their platform engineering charter to both technical and non-technical stakeholders inside their operations will not fully be able to reap the benefits of the platform’s use across the business.


Building generative AI applications is too hard, developers say

Given the number of tools they need to do their job, it’s no surprise that developers are loath to spend a lot of time adding another to their arsenal. Two thirds of them are only willing to invest two hours or less in learning a new AI development tool, with a further 22% allocating three to five hours, and only 11% giving more than five hours to the task. And on the whole, they don’t tend to explore new tools very often — only 21% said they check out new tools monthly, while 78% do so once every one to six months, and the remaining 2% rarely or never. The survey found that they tend to look at around six new tools each time. ... The survey highlights the fact that, while AI and generative AI are becoming increasingly important to businesses, the tools and techniques require to develop them are not keeping up. “Our survey results shed light on what we can do to help address the complexity of AI development, as well as some tools that are already helping,” Gunnar noted. “First, given the pace of change in the generative AI landscape, we know that developers crave tools that are easy to master.” And, she added, “when it comes to developer productivity, the survey found widespread adoption and significant time savings from the use of AI-powered coding tools.”


AI infrastructure – The value creation battleground

Scaling AI infrastructure isn’t just about adding more GPUs or building larger data centers – it’s about solving fundamental bottlenecks in power, latency, and reliability while rethinking how intelligence is deployed. AI mega clusters are engineering marvels – data centers capable of housing hundreds of thousands of GPUs and consuming gigawatts of power. These clusters are optimized for machine learning workloads with advanced cooling systems and networking architectures designed for reliability at scale. Consider Microsoft’s Arizona facility for OpenAI: with plans to scale up to 1.5 gigawatts across multiple sites, it demonstrates how these clusters are not just technical achievements but strategic assets. By decentralizing compute across multiple data centers connected via high-speed networks, companies like Google are pioneering asynchronous training methods to overcome physical limitations such as power delivery and network bandwidth. Scaling AI is an energy challenge. AI workloads already account for a growing share of global data center power demand, which is projected to double by 2026. This creates immense pressure on energy grids and raises urgent questions about sustainability.


4 Leadership Strategies For Managing Teams In The Metaverse

Leaders must develop new skills and adopt innovative strategies to thrive in the metaverse. Here are some key approaches: Invest in digital literacy—Leaders must become fluent in the tools and technologies that power the metaverse. This includes understanding VR/AR platforms, blockchain applications and collaborative software such as Slack, Trello and Figma. Emphasize inclusivity—The metaverse has the potential to democratize access to opportunities, but only if it’s designed with inclusivity in mind. Leaders should ensure that virtual spaces are accessible to employees of all abilities and backgrounds. This might include providing hardware like VR headsets or ensuring platforms support diverse communication styles. Create rituals for connection—Leaders can foster connection through virtual rituals and gatherings in the absence of physical offices. These activities, from weekly team check-ins to informal virtual “watercooler” chats, help build camaraderie and maintain a sense of community. Focus on well-being—Effective leaders prioritize employee well-being by setting clear boundaries, encouraging breaks and supporting mental health.


How AI will shape work in 2025 — and what companies should do now

“The future workforce will likely collaborate more closely with AI tools. For example, marketers are already using AI to create more personalized content, and coders are leveraging AI-powered code copilots. The workforce will need to adapt to working alongside AI, figuring out how to make the most of human strengths and AI’s capabilities. “AI can also be a brainstorming partner for professionals, enhancing creativity by generating new ideas and providing insights from vast datasets. Human roles will increasingly focus on strategic thinking, decision-making, and emotional intelligence. ... “Companies should focus on long-term strategy, quality data, clear objectives, and careful integration into existing systems. Start small, scale gradually, and build a dedicated team to implement, manage, and optimize AI solutions. It’s also important to invest in employee training to ensure the workforce is prepared to use AI systems effectively. “Business leaders also need to understand how their data is organized and scattered across the business. It may take time to reorganize existing data silos and pinpoint the priority datasets. To create or effectively implement well-trained models, businesses need to ensure their data is organized and prioritized correctly.



Quote for the day:

"The world is starving for original and decisive leadership." -- Bryant McGill

Daily Tech Digest - January 08, 2025

GenAI Won’t Work Until You Nail These 4 Fundamentals

Too often, organizations leap into GenAI fueled by excitement rather than strategic intent. The urgency to appear innovative or keep up with competitors drives rushed implementations without distinct goals. They see GenAI as the “shiny new [toy],” as Kevin Collins, CEO of Charli AI, aptly puts it, but the reality check comes hard and fast: “Getting to that shiny new toy is expensive and complicated.” This rush is reflected in over 30,000 mentions of AI on earnings calls in 2023 alone, signaling widespread enthusiasm but often without the necessary clarity of purpose. ... The shortage of strategic clarity isn’t the only roadblock. Even when organizations manage to identify a business case, they often find themselves hamstrung by another pervasive issue: their data. Messy data hampers organizations’ ability to mature beyond entry-level use cases. Data silos, inconsistent formats and incomplete records create bottlenecks that prevent GenAI from delivering its promised value. ... Weak or nonexistent governance structures expose companies to various ethical, legal and operational risks that can derail their GenAI ambitions. According to data from an Info-Tech Research Group survey, only 33% of GenAI adopters have implemented clear usage policies. 


Inside the AI Data Cycle: Understanding Storage Strategies for Optimised Performance

The AI Data Cycle is a six-stage framework, beginning with the gathering and storing of raw data. In this initial phase, data is collected from multiple sources, with a focus on assessing its quality and diversity, which establishes a strong foundation for the stages that follow. For this phase, high-capacity enterprise hard disk drives (eHDDs) are recommended, as they provide high storage capacity and cost-effectiveness per drive. In the next stage, data is prepared for ingestion, and this is where insight from the initial data collection phase is processed, cleaned and transformed for model training. To support this phase, data centers are upgrading their storage infrastructure – such as implementing fast data lakes – to streamline data preparation and intake. At this point, high-capacity SSDs play a critical role, either augmenting existing HDD storage or enabling the creation of all-flash storage systems for faster, more efficient data handling. Next is the model training phase, where AI algorithms learn to make accurate predictions using the prepared training data. This stage is executed on high-performance supercomputers, which require specialised, high-performing storage to function optimally. 


Buy or Build: Commercial Versus DIY Network Automation

DIY automation can be tailored to your specific network and, in some cases, can meet security or compliance requirements more easily than vendor products. DIY tools also come at a great price: free! The cost of a commercial tool is sometimes higher than the value it creates, especially if you have unusual use cases. But DIY tools take time to build and support. Over 50% of organizations in EMA’s survey spend 6-20 hours per week debugging and supporting homegrown tools. Cultural preferences also come into play. While engineers love to grumble about vendors and their products, that doesn’t mean they prefer DIY. In my experience, NetOps teams are often set in their ways, preferring manual processes that do not scale to match the complexity of modern networks. Many network engineers do not have the coding skills to build good automation, and most don't think broadly about how automation could tackle their problems. The first and most obvious fix for the issues holding back automation is simply for automation tools to get better. They must offer broad integrations and be vendor-neutral. Deep network-mapping capabilities help address legacy networks and reduce the use cases that require DIY. Low-code or no-code tools help ease budget, staffing, and skills constraints.


How HR can lead the way in embracing AI as a catalyst for growth

Common workplace concerns include job displacement, redundancy, bias in AI decision-making, output accuracy, and the handling of sensitive data. Tracy notes that these are legitimate worries that HR must address proactively. “Clear policies are essential. These should outline how AI tools can be used, especially with sensitive data, and safeguards must be in place to protect proprietary information,” she explains. At New Relic, open communication about AI integration has built trust. AI is viewed as a tool to eliminate repetitive tasks, freeing time for employees to focus on strategic initiatives. For instance, their internally developed AI tools support content drafting and research, enabling leaders like Tracy to prioritize high-value activities, such as driving organizational strategy. “By integrating AI thoughtfully and transparently, we’ve created an environment where it’s seen as a partner, not a threat,” Tracy says. This approach fosters trust and positions AI as an ally in smarter, more secure work practices. The key is to highlight how AI can help everyone excel in their roles and elevate the work they do every day. “While it’s realistic to acknowledge that some aspects of our jobs—or even certain roles—may evolve with AI, the focus should be on how we integrate it into our workflow and use it to amplify our impact and efficiency,” notes Tracy.


Cloud providers are running out of ‘next big things’

Yes, every cloud provider is now “an AI company,” but let’s be honest — they’re primarily engineering someone else’s innovations into cloud-consumable services. GPT-4 through Microsoft Azure? That’s OpenAI’s innovation. Vector databases? They came from the open source community. Cloud providers are becoming AI implementation platforms rather than AI innovators. ... The root causes of the slowdown in innovation are clear. First, the market has matured: the foundational issues in cloud computing have mostly been resolved, and what’s left are increasingly specialized niche cases. Second, AWS, Azure, and Google Cloud are no longer the disruptors — they’re the defenders of market share. Their focus has shifted from innovation to optimization and retention. A defender’s mindset manifests itself in product strategies. Rather than introducing revolutionary new services, cloud providers are fine-tuning existing offerings. They’re also expanding geographically, with the hyperscalers expected to announce 30 new regions in 2025. However, these expansions are driven more by data sovereignty requirements than by innovative new capabilities. This innovation slowdown has profound implications for enterprises. Many organizations bet their digital transformation on cloud-native architectures and the promise of continuous innovation.


Historical Warfare’s Parallels with Cyber Warfare

In 1942, the British considered Singapore nearly impregnable. They fortified its coast heavily, believing any attack would come from the sea. Instead, the Japanese stunned the defenders by advancing overland through dense jungle terrain the British deemed impassable. This unorthodox approach, using bicycles in great numbers and small tracks through the jungle, enabled the Japanese forces to hit the defences at their weakest point, well ahead of the projected timetable, catching the British off guard. In cybersecurity, this corresponds to zero-day vulnerabilities and unconventional attack vectors. Hackers exploit flaws that defenders never saw coming, turning supposedly secure systems into easy marks. The key lesson is never to grow complacent, because you never know what you can be hit with, or when. ... Cyber attackers also use psychology against their targets. Phishing emails appeal to curiosity, trust, greed, or fear, luring victims into clicking malicious links or revealing passwords. Social engineering exploits human nature rather than code, and defenders must recognise that people, not just machines, are the frontline. Regular training, clear policies, and an ingrained culture of healthy scepticism (already present in most IT staff) can thwart even the most artful psychological ploys.


Insider Threat: Tackling the Complex Challenges of the Enemy Within

Third-party background checking can only go so far. It must be supported by old-fashioned, experienced interviewing. Omri Weinberg, co-founder and CRO at DoControl, explains his methodology: “We’re primarily concerned with two types of bad actors. First, there are those looking to use the company’s data for nefarious purposes. These individuals typically have the skills to do the job and then some – they’re often overqualified. They pose a severe threat because they can potentially access and exploit sensitive data or systems.” The second type includes those who oversell their skills and are actually underqualified, sometimes severely so. “While they might not have malicious intent, they can still cause significant damage through incompetence or by introducing vulnerabilities due to their lack of expertise. For the overqualified potential bad actors, we’re wary of candidates whose skills far exceed the role’s requirements without a clear explanation. For the underqualified group, we look for discrepancies between claimed skills and actual experience or knowledge during interviews.” This means it is important to probe candidates during the interview to gauge their true skill level. “It’s essential that the person evaluating the hire has the technical expertise to make these determinations,” he added.


Raise your data center automation game with easy ecosystem integration

If integrations are the key, then the things you look for to understand whether a product is flashy or meaningful should change. The UI matters, but the way tools are integrated is the truly telling characteristic. What APIs exist? How is data normalized? Are interfaces versioned and maintained across different releases? Can you create complex dashboards that pull things together from different sources using no-code models that don't require source access to contextualize your environment? How are workflows strung together into more complex operations? By changing your focus, you can start to evaluate these platforms based on how well they integrate rather than on how snazzy the time series database interface is. Of course, things like look and feel matter, but anyone who wants to scale their operations will realize that the UI might not even be the dominant consumption model over time. Is your team looking to click their way through to completion? ... Wherever you are in this discovery process, let me offer some simple advice: Expand your purview from the network to the ecosystem and evaluate your options in the context of that ecosystem. When you do that effectively, you should know which solutions are attractive but incremental and which are likely to create more durable value for you and your organization.
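
To make the “how is data normalized?” question concrete, here is a minimal Python sketch assuming two hypothetical monitoring tools that report the same interface metric in different shapes; the tool payloads, field names, and the InterfaceMetric schema are all invented for illustration. It shows the kind of normalisation glue that a well-integrated platform should provide out of the box rather than leaving to your team.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass
class InterfaceMetric:
    device: str
    interface: str
    utilisation_pct: float
    observed_at: datetime

def from_tool_a(record: dict[str, Any]) -> InterfaceMetric:
    # Hypothetical tool A reports utilisation as a 0-1 ratio and epoch seconds.
    return InterfaceMetric(
        device=record["hostname"],
        interface=record["ifName"],
        utilisation_pct=record["util_ratio"] * 100,
        observed_at=datetime.fromtimestamp(record["ts"], tz=timezone.utc),
    )

def from_tool_b(record: dict[str, Any]) -> InterfaceMetric:
    # Hypothetical tool B reports a percentage string and an ISO-8601 timestamp.
    return InterfaceMetric(
        device=record["device_id"],
        interface=record["port"],
        utilisation_pct=float(record["utilisation"]),
        observed_at=datetime.fromisoformat(record["timestamp"]),
    )

if __name__ == "__main__":
    normalised = [
        from_tool_a({"hostname": "core1", "ifName": "xe-0/0/1", "util_ratio": 0.42, "ts": 1736640000}),
        from_tool_b({"device_id": "edge7", "port": "Gi0/1", "utilisation": "63.5",
                     "timestamp": "2025-01-12T00:00:00+00:00"}),
    ]
    for metric in normalised:
        print(metric)

The more of this translation a platform handles through documented, versioned APIs, the less brittle per-tool code your team has to write and maintain.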


Why Scrum Masters Should Grow Their Agile Coaching Skills

More than half of the organizations surveyed report that finding scrum masters with the right combination of skills to meet their evolving demands is very challenging. Notably, 93% of companies seek candidates with strong coaching skills but say it is one of the hardest skills to find. Building strong coaching and facilitation skills can help you stand out in the job market and open doors to new career opportunities. As scrum masters are expected to take on increasingly strategic roles, these skills become even more valuable. Senior scrum masters, in particular, are called upon to handle politically sensitive and technically complex situations, bridging gaps between development teams and upper management. Coaching and facilitation skills are requested nearly three times more often for senior scrum master roles than for other positions. Growing these coaching competencies can give you an edge and help you make a bigger impact in your career. ... Who wouldn’t want to move up in their career into roles with greater responsibilities and bigger impact? Regardless of the area of the company you’re in—product, sales, marketing, IT, operations—you’ll need leadership skills to guide people and enable change within the organization.


Scaling penetration testing through smart automation

Automation undoubtedly has tremendous potential to streamline the penetration testing lifecycle for MSSPs. The most promising areas are the repetitive, data-intensive, and time-consuming aspects of the process. For instance, automated tools can cross-reference vulnerabilities against known exploit databases like CVE, significantly reducing manual research time. They can enhance accuracy by minimizing human error in tasks like calculating CVSS scores. Automation can also drastically reduce the time required to compile, format, and standardize pen-testing reports, which can otherwise take hours or even days depending on the scope of the project. For MSSPs handling multiple client engagements, this could translate into faster project delivery cycles and improved operational efficiency. For their clients, it enables near real-time responses to vulnerabilities, reducing the window of exposure and bolstering their overall security posture. However – and this is crucial – automation should not be treated as a silver bullet. Human expertise remains absolutely indispensable in the testing itself. The human ability to think creatively, to understand complex system interactions, to develop unique attack scenarios that an algorithm might miss—these are irreplaceable.
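
As a hedged illustration of the repetitive work described above, the Python sketch below flags findings that appear in a local set of known-exploited CVE IDs and buckets CVSS v3.x base scores into the standard severity bands (Low, Medium, High, Critical). The KNOWN_EXPLOITED set and the findings list are stand-ins invented for the example, not a real exploit feed or real engagement data.

from dataclasses import dataclass

# Stand-in for a real known-exploited-vulnerabilities feed (e.g. a downloaded catalogue).
KNOWN_EXPLOITED = {"CVE-2021-44228", "CVE-2023-4966"}

@dataclass
class Finding:
    cve_id: str
    cvss_score: float
    host: str

def severity(score: float) -> str:
    """Bucket a CVSS v3.x base score into its standard severity band."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

def report_rows(findings: list[Finding]) -> list[str]:
    """Produce one standardised report line per finding, highest score first."""
    rows = []
    for finding in sorted(findings, key=lambda item: item.cvss_score, reverse=True):
        exploited = "known exploit" if finding.cve_id in KNOWN_EXPLOITED else "no known exploit"
        rows.append(f"{finding.host}  {finding.cve_id}  "
                    f"{finding.cvss_score:.1f} ({severity(finding.cvss_score)})  {exploited}")
    return rows

if __name__ == "__main__":
    # Invented findings for the example; CVE-2021-44228 (Log4Shell) really does score 10.0.
    findings = [
        Finding("CVE-2021-44228", 10.0, "app01"),
        Finding("CVE-2025-XXXXX", 7.5, "web02"),  # placeholder ID, not a real CVE
    ]
    print("\n".join(report_rows(findings)))

This is exactly the kind of mechanical triage and report formatting worth automating, while the creative attack-path work stays with the human tester.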



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson