Daily Tech Digest - February 23, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” -- George Bernard Shaw



Google Adds Quantum-Resistant Digital Signatures to Cloud KMS

After a process that kicked off nearly a decade ago, NIST officially published the first three PQC standards last August. The standards, based on advanced encryption algorithms, are now known as FIPS 203, FIPS 204, and FIPS 205, although additional specifications are still under review by NIST. Google's strategy calls for support for the current and future NIST standards. While Cloud KMS will eventually support all three NIST standards, Google's initial release implements the two digital signature algorithms: FIPS 204, which enables lattice-based digital signatures, and FIPS 205, which is for stateless hash-based digital signatures. Porter says support for FIPS 203, which standardizes the ML-KEM key-encapsulation mechanism, will come later in the year. ... "Making the open source libraries and Cloud KMS to support those specific signatures with those keys will give the opportunity for our customers to validate those performance implications to their environments when they use those keys for the signing of longer linked environments," Porter explains. Google is not the only major player adding open source libraries that support the NIST standards. In September, Microsoft started releasing support for the NIST standards in SymCrypt, its open source core cryptographic library used in Azure, Microsoft 365, Windows 11, Windows 10, Windows Server, Azure Stack HCI, and Azure Linux. 
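FIPS 205 builds modern stateless hash-based signing out of hash-based one-time signatures. As a rough intuition for how a signature scheme can be built from hash functions alone, here is a toy Lamport one-time signature using only the standard library; it illustrates the underlying idea, not the standardized algorithm, and a Lamport key must never sign more than one message.

```python
import hashlib
import secrets

def keygen():
    # Private key: 256 pairs of random 32-byte values, one pair per message-digest bit
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hash of each private value
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one preimage per digest bit; reusing the key leaks the other halves
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    return all(hashlib.sha256(signature[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_bits(message)))
```

A signature is just 256 revealed preimages, so verification is nothing more than 256 hash comparisons; schemes like SLH-DSA layer trees of such one-time keys to allow many signatures per public key.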


The most critical job skill you need to thrive in the AI revolution

A few weeks ago, the World Economic Forum dropped its predictions for the future of jobs and the seismic shift in the workforce over the next five years, through 2030. ... Half of employers plan to reorient business strategies in response to the rise of AI. In fact, 2 in 3 plan to hire for AI-specific skills (this is where the new jobs will come from). 40% of those same businesses also think their workforce will shrink due to AI automating tasks. On the surface, this might seem like doom and gloom, but remember, we are talking about 78 million new jobs by 2030. It is safe to assume some of that workforce will find employment in companies that don't exist yet. Another insight that stood out to me, and deserves its own article, is that an aging population will drive the demand for more healthcare jobs. This could be a huge opportunity. Let me know in the comments if you want me to discuss the possibilities. ... As for your big opportunity, I feel like everyone is so focused on the shiny objects: what are the best prompts, what is the best tool? Those are fine, but not enough focus is placed on the soft skills. It's as if we're forgetting that even though we use AI to create, our creations are still intended for humans. To put it another way, some businesses are using AI and becoming sloppy, not caring about the customer, and so on.


MDR, EDR Markets See Wave of M&A as Competition Intensifies

Organizations traditionally relied on managed security services for log monitoring and basic alerting. MDR took this a step further by offering real-time threat detection, investigation and response. At the same time, vendors came to realize that endpoint visibility alone through EDR was insufficient, leading to XDR, which integrates signals from multiple layers, including cloud, network and identity systems. "It's complicated to learn the skills to be able to operate these kinds of platforms really efficiently, and it's even more challenging to be able to do it 24/7/365," Levy said. "Most organizations simply aren't equipped to be able to run a global SOC with multiple shifts." While XDR expanded detection capabilities, Levy said it also introduced operational complexities, with most companies lacking the expertise and resources to manage a sophisticated security platform 24/7, leading to the rise of MDR as a fully managed security service. True MDR should go beyond the endpoint and include threat detection across cloud environments, networks and identity systems, Schneider said. "Once partners get engaged and really see the value in managed EDR, the conversation immediately goes to, 'Can you do the same thing for my firewalls? Can you do the same thing for my NDR solution? Can you do the same thing for my identity solution?'" 


We need to talk about the F word (‘friction’ in enterprise, that is)

By striking the right balance, companies can use friction to their advantage. Friction, after all, is another word for feedback — so products that become completely frictionless stop responding to users’ needs. The pursuit of frictionlessness can launch you skywards, but over time you’ll struggle to course-correct. Eventually, gravity will drag you back to earth. This isn’t hypothetical: Research shows that friction makes many systems — including businesses — smarter and more resilient. A bit of strategic inconvenience can improve market performance, with investors making smarter decisions when they’re forced to slow down and think about trades. ... For technologists, that means asking: What problems are you solving by eliminating friction — and what problems might you create, now or in the future, by doing so? Every design choice brings tradeoffs, but balancing risks and rewards to design for the right level of friction enables both rapid growth and long-term sustainability. Such an approach could also make it easier to have grown-up conversations about the need to regulate AI and other emerging technologies. Regulations always add friction — but once we accept that some friction can be valuable, we can work collaboratively with policymakers to find the right level of friction to support innovation while protecting and respecting consumers.


Struggling to Become Truly Data-Driven? Focus on Access and Culture, Not Tech

Success in data strategy requires strong leadership commitment and cultural transformation. The playbook emphasizes the role of leaders in advancing data literacy and encouraging data-driven decision-making. This includes identifying and empowering "data champions" across the organization and creating communities of practice to share knowledge and best practices. Training and development play crucial roles in building data capabilities. The report recommends targeted training programs for employees central to data usage, utilizing both online and in-person resources. Investment in training yields significant returns through improved efficiency, better decision-making, and enhanced customer service. However, training should not be a one-size-fits-all approach; it should be tailored to different roles and skill levels within the organization. The report emphasizes that becoming a data-driven organization is an ongoing journey rather than a destination. Financial institutions must continuously evolve their data strategies to keep pace with changing technology and customer expectations. This includes exploring emerging technologies like artificial intelligence and machine learning, while ensuring they maintain a strong foundation in data quality and governance.


Introduction to Service Mesh

A service mesh is an infrastructure layer that sits alongside the services of a distributed application, facilitating dependable and visible communication among microservices. It oversees how services interact with one another, handling tasks such as discovering services, distributing workloads evenly, recovering from failures, collecting metrics and monitoring performance. ... By separating network management duties from the application code, a service mesh makes it easier for developers and operations teams to handle tasks efficiently. Developers can concentrate on creating business logic without the need to integrate service discovery, load balancing or security protocols into their applications. Operations teams can take advantage of the centralized management of policies and configurations provided by the service mesh’s control plane. ... When selecting a service mesh, it’s important to consider scalability. Make sure that the service mesh is capable of accommodating the size of your microservices setup and can adapt as your application grows. Assess how the service mesh affects your system’s performance and the load added by sidecar proxies. A scalable service mesh should maintain performance with minimal added latency as you add more services and traffic levels rise.
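The sidecar responsibilities described above (service discovery, load balancing, failure recovery) can be sketched in miniature. The endpoints and the failure simulation below are hypothetical; real meshes implement this in a proxy such as Envoy, outside the application process.

```python
import itertools

class Sidecar:
    """Toy stand-in for what a mesh sidecar proxy does for the application:
    pick an endpoint (load balancing) and retry elsewhere on failure."""
    def __init__(self, endpoints, max_retries=3):
        self._rr = itertools.cycle(endpoints)  # round-robin over discovered endpoints
        self.max_retries = max_retries

    def call(self, send):
        last_error = None
        for _ in range(self.max_retries):
            endpoint = next(self._rr)
            try:
                return send(endpoint)          # application code never picks endpoints
            except ConnectionError as err:
                last_error = err               # fail over to the next endpoint
        raise last_error

def send(endpoint):
    if endpoint.startswith("10.0.0.1"):
        raise ConnectionError(endpoint)        # simulate an unhealthy instance
    return f"200 OK from {endpoint}"

mesh = Sidecar(["10.0.0.1:8080", "10.0.0.2:8080"])
print(mesh.call(send))  # transparently fails over to the healthy endpoint
```

The application calls `mesh.call(...)` and never sees the unhealthy instance; that separation of concerns is the core of the sidecar pattern.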


Why enterprises fail at finops

One of the most significant challenges is the lack of integration between the finops and engineering teams responsible for building and deploying cloud applications. McKinsey’s report showed that many organizations struggle to capture savings beyond the immediate finops team’s mandate because these teams often lack the incentives or access to cloud cost data. Consequently, many well-meaning optimization efforts fall by the wayside as engineers juggle multiple priorities or lack the resources to focus on cost-related improvements. Another issue is the lack of systematic implementation of finops best practices. This is where finops as code (FaC) becomes essential, incorporating finops processes directly into application configurations to make them foolproof. FaC can dramatically reduce costs by integrating financial management principles directly into the infrastructure management life cycle. Organizations can enforce budget constraints by automatically identifying opportunities for cost reduction, supporting more efficient resource scheduling, and employing cloud-native services to decrease operational cloud resource expenses. Many organizations struggle with basic cloud hygiene practices. They’re not effectively identifying and eliminating obvious sources of waste, such as underutilized resources, oversized virtual machines, and redundant storage volumes. 
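The finops-as-code idea can be sketched as a budget policy check that runs before deployment. The instance prices, the 730-hour month, and the budget figure below are illustrative assumptions, not real rates:

```python
# Hypothetical hourly prices; real FaC tooling would pull these from billing APIs
HOURLY_PRICE = {"m5.large": 0.096, "m5.xlarge": 0.192, "m5.2xlarge": 0.384}

def check_budget(resources, monthly_budget):
    """Project a deployment plan's monthly cost (~730 hours/month) and
    flag whether it fits the budget before anything is provisioned."""
    projected = sum(HOURLY_PRICE[r["type"]] * 730 * r["count"] for r in resources)
    return {"projected_monthly": round(projected, 2),
            "within_budget": projected <= monthly_budget}

plan = [{"type": "m5.xlarge", "count": 4}, {"type": "m5.large", "count": 2}]
print(check_budget(plan, monthly_budget=800))
```

Wiring a check like this into a CI pipeline is what turns cost policy from a monthly report into a gate that engineers hit automatically.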


Building the next-gen creator economy with AI agents

Autonomous agents simplify content distribution and monetization by automating tasks such as pricing, licensing, and revenue sharing, freeing creators to focus on their craft. For instance, these agents can optimize pricing strategies based on market demand or manage revenue splits transparently. Unlike traditional AI tools, decentralized agents can operate trustlessly onchain, ensuring transparency, reducing costs, and eliminating third-party intermediaries. By leveraging programmable rules and onchain verification, autonomous agents also allow creators to explore new revenue streams—such as micro-licensing or fractional ownership of digital assets—giving them control over their intellectual property while tapping into innovative monetization models. Ethical concerns, such as licensing and copyright issues, can be addressed through programmable licensing rights embedded in content metadata. ... The use of trustless, onchain computation means that creators are not reliant on centralized APIs or platforms, which could compromise their data or artistic vision. Unlike many current AI agents that depend on centralized APIs like OpenAI, these decentralized agents operate sustainably and transparently, avoiding vulnerabilities tied to centralized control. 


The Future of Cybersecurity: AI-Driven Threat Detection and Prevention

Artificial intelligence has revolutionized the way organizations respond to threat detection. Contemporary AI systems are capable of examining huge volumes of network traffic, log data, and user activity in real-time, detecting subtle patterns that could represent a security compromise. AI-powered Security Information and Event Management (SIEM) solutions can examine billions of security events per day, correlating seemingly unrelated activity to reveal advanced attack campaigns. ... Machine learning algorithms are now shifting from reactive security to predictive threat prevention. By examining past patterns of attacks and present system activity, AI can detect potential security threats before they become real threats. This is especially effective in insider threat detection, where AI algorithms can detect slight variations in employee behavior that could be a sign of compromise or malicious activity. ... When an incident is detected, AI-based security orchestration platforms can respond automatically, cutting in half the lag time between detection and mitigation. They can isolate infected systems, withdraw misused credentials, and apply countermeasures in seconds – operations that it would take human teams hours or even days to do manually.
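The behavioral-baselining idea behind such detection can be illustrated with a simple z-score check. Real SIEM platforms use far richer models, and the event counts here are invented:

```python
from statistics import mean, stdev

def anomalies(baseline, current, threshold=3.0):
    """Flag users whose current event count sits more than `threshold`
    standard deviations above the historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return {user: count for user, count in current.items()
            if sigma > 0 and (count - mu) / sigma > threshold}

# Hourly failed-login counts observed historically (illustrative data)
history = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
now = {"alice": 3, "bob": 48}  # bob's spike suggests credential stuffing
print(anomalies(history, now))
```

With this data the baseline mean is 2.3 with a standard deviation near 0.95, so alice's 3 passes unnoticed while bob's 48 is flagged; production systems apply the same principle per user, per asset, and per signal type.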


Generative AI is already being used in journalism – here’s how people feel about it

What if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different from what a computer might think? These criteria may also change over time or in different contexts. Even something as simple as lightening or darkening an image can cause a furore when politics are involved. AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context. Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content. AI-generated news alerts have also gotten the facts wrong. ... Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use. Most of our participants were comfortable with turning to AI to create icons for an infographic.

Daily Tech Digest - February 21, 2025


Quote for the day:

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan


Rethinking Network Operations For Cloud Repatriation

Repatriation introduces significant network challenges, further amplified by the adoption of disruptive technologies like SDN, SD-WAN, SASE and the rapid integration of AI/ML, especially at the edge. While beneficial, these technologies add complexity to network management, particularly in areas such as traffic routing, policy enforcement, and handling the unpredictable workloads generated by AI. ... Managing a hybrid environment spanning on-premises and public cloud resources introduces inherent complexity. Network teams must navigate diverse technologies, integrate disparate tools and maintain visibility across a distributed infrastructure. On-premises networks often lack the dynamic scalability and flexibility of cloud environments. Absorbing repatriated workloads further complicates existing infrastructure, making monitoring and troubleshooting more challenging. ... Repatriated workloads introduce potential security vulnerabilities if not seamlessly integrated into existing security frameworks. On-premises security stacks not designed for the increased traffic volume previously handled by SASE services can introduce latency and performance bottlenecks. Adjustments to SD-WAN routing and policy enforcement may be necessary to redirect traffic to on-premises security resources.


For the AI era, it’s time for BYOE: Bring Your Own Ecosystem

We can no longer limit user access to one or two devices — we must address the entire ecosystem. Instead of forcing users down a single, constrained path, security teams need to acknowledge that users will inevitably venture into unsafe territory, and focus on strengthening the security of the broader environment. In 2015, we as security practitioners could get by with placing “do not walk on the grass” signs and ushering users down manicured pathways. In 2025, we need to create more resilient grass. ... The risk extends beyond basic access. Forty percent of employees download customer data to personal devices, while 33% alter sensitive data, and 31% approve large financial transactions. And, most alarming, 63% use personal accounts on their work laptops — most commonly Google — to share work files and create documents, effectively bypassing email filtering and data loss prevention (DLP) systems. ... Browser-based access exposes users to risks from malicious plugins, extensions and post-authentication compromise, while the increasing reliance on SaaS applications creates opportunities for supply chain attacks. Personal accounts serve as particularly vulnerable entry points, allowing threat actors to leverage compromised credentials or stolen authentication tokens to infiltrate corporate networks.


DARPA continues work on technology to combat deepfakes

The rapid evolution of generative AI presents a formidable challenge in the arms race between deepfake creators and detection technologies. As AI-driven content generation becomes more sophisticated, traditional detection mechanisms risk quickly becoming obsolete. Deepfake detection relies on training machine learning models on large datasets of genuine and manipulated media, but the scarcity of diverse and high-quality datasets can impede progress. Limited access to comprehensive datasets has made it difficult to develop robust detection systems that generalize across various media formats and manipulation techniques. To address this challenge, DARPA puts a strong emphasis on interdisciplinary collaboration. By partnering with institutions such as SRI International and PAR Technology, DARPA leverages cutting-edge expertise to enhance the capabilities of its deepfake detection ecosystem. These partnerships facilitate the exchange of knowledge and technical resources that accelerate the refinement of forensic tools. DARPA’s open research model also allows diverse perspectives to converge, fostering rapid innovation and adaptation in response to emerging threats. Deepfake detection also faces significant computational challenges. Training deep neural networks to recognize manipulated media requires extensive processing power and large-scale data storage.


AI Agents: Future of Automation or Overhyped Buzzword?

AI agents are not just an evolution of AI; they are a fundamental shift in IT operations and decision-making. These agents are being increasingly integrated into Predictive AIOps, where they autonomously manage, optimize, and troubleshoot systems without human intervention. Unlike traditional automation, which follows pre-defined scripts, AI agents dynamically predict, adapt, and respond to system conditions in real time. ... AI agents are transforming IT management and operational resilience. Instead of just replacing workflows, they now optimize and predict system health, automatically mitigating risks and reducing downtime. Whether it's self-repairing IT infrastructure, real-time cybersecurity monitoring, or orchestrating distributed cloud environments, AI Agents are pushing technology toward self-governing, intelligent automation. ... The future of AI agents is both thrilling and terrifying. Companies are investing in large action models — next-gen AI that doesn’t just generate text but actually does things. We’re talking about AI that can manage entire business processes or run a company’s operations without human intervention. ... AI agents aren’t just another tech buzzword — they represent a fundamental shift in how AI interacts with the world. Sure, we’re still in the early days, and there’s a lot of fluff in the market, but make no mistake: AI agents will change the way we work, live, and do business.


Optimizing Cloud Security: Managing Sprawl, Technical Debt, and Right-Sizing Challenges

Technical debt is the implied cost of future IT infrastructure rework caused by choosing expedient IT solutions like shortcuts, software patches or deferred IT upgrades over long-term, sustainable designs. It’s easily accrued when under pressure to innovate quickly but leads to waste, security gaps, and vulnerabilities that compromise an organization’s integrity, making systems more susceptible to cyber threats. Technical debt can also be costly to eradicate, with companies spending an average of 20-40% of their IT budgets on addressing it. ... Cloud sprawl refers to the uncontrolled proliferation of cloud services, instances, and resources within an organization. It often results from rapid growth, lack of visibility, and decentralized decision-making. At Surveil, we have over 2.5 billion data points to lean on to identify trends and we know that organizations with unmanaged cloud environments can see up to 30% higher cloud costs due to redundant and idle resources. Unchecked cloud sprawl can lead to increased security vulnerabilities due to unmanaged and unmonitored resources. ... Right-sizing involves aligning IT resources precisely with the demands of applications or workloads to optimize performance and cost. Our data shows that organizations that effectively right-size their IT estate can reduce cloud costs by up to 40%, unlocking business value to invest in other business priorities. 
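A minimal sketch of the right-sizing logic described above, assuming a simple halve-the-vCPUs rule and invented utilization figures; production tools work from per-SKU catalogs and much richer telemetry:

```python
def rightsize(instances, target_util=0.6):
    """Suggest downsizing instances whose peak CPU utilization sits far
    below the target (here: less than half the target)."""
    suggestions = {}
    for name, info in instances.items():
        if info["peak_cpu"] < target_util / 2:
            suggestions[name] = {
                "vcpus": max(1, info["vcpus"] // 2),  # crude halving rule
                "reason": f"peak CPU {info['peak_cpu']:.0%} is far below "
                          f"the {target_util:.0%} target",
            }
    return suggestions

fleet = {"web-1": {"vcpus": 8, "peak_cpu": 0.22},
         "db-1": {"vcpus": 16, "peak_cpu": 0.71}}
print(rightsize(fleet))
```

Run against real monitoring data on a schedule, even a rule this crude surfaces the oversized virtual machines that drive much of the waste described above.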


How businesses can avoid a major software outage

Software bugs and bad code releases are common culprits behind tech outages. These issues can arise from errors in the code, insufficient testing, or unforeseen interactions among software components. Moreover, the complexity of modern software systems exacerbates the risk of outages. As applications become more interconnected, the potential for failures increases. A seemingly minor bug in one component can have far-reaching consequences, potentially bringing down entire systems or services. ... The impact of backup failures can be particularly devastating as they often come to light during already critical situations. For instance, a healthcare provider might lose access to patient records during a primary system failure, only to find that their backup data is incomplete or corrupted. Such scenarios underscore the importance of not just having backup systems, but ensuring they are fully functional, up-to-date, and capable of meeting the organization's recovery needs. ... Human error remains one of the leading causes of tech outages. This can include mistakes made during routine maintenance, misconfigurations, or accidental deletions. In high-pressure environments, even experienced professionals can make errors, especially when dealing with complex systems or tight deadlines.


Serverless was never a cure-all

Serverless architectures were originally promoted as a way for developers to rapidly deploy applications without the hassle of server management. The allure was compelling: no more server patching, automatic scalability, and the ability to focus solely on business logic while lowering costs. This promise resonated with many organizations eager to accelerate their digital transformation efforts. Yet many organizations adopted serverless solutions without fully understanding the implications or trade-offs. It became evident that while server management may have been alleviated, developers faced numerous complexities. ... The pay-as-you-go model appears attractive for intermittent workloads, but it can quickly spiral out of control if an application operates under unpredictable traffic patterns or contains many small components. The requirement for scalability, while beneficial, also necessitates careful budget management—this is a challenge if teams are unprepared to closely monitor usage. ... Locating the root cause of issues across multiple asynchronous components becomes more challenging than in traditional, monolithic architectures. Developers often spent the time they saved from server management struggling to troubleshoot these complex interactions, undermining the operational efficiencies serverless was meant to provide.
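The pay-as-you-go crossover can be made concrete with a back-of-the-envelope model. All prices here are illustrative, loosely modeled on per-request plus GB-second billing:

```python
def serverless_cost(requests, ms_per_req=200, gb=0.5,
                    price_per_m_requests=0.20, price_gb_s=0.0000166667):
    """Monthly pay-per-use cost: compute time billed in GB-seconds
    plus a per-million-request fee (all figures illustrative)."""
    compute = requests * (ms_per_req / 1000) * gb * price_gb_s
    return compute + requests / 1e6 * price_per_m_requests

FIXED_SERVER = 70.0  # illustrative monthly cost of an always-on instance

for monthly_requests in (1e6, 10e6, 50e6, 100e6):
    cost = serverless_cost(monthly_requests)
    cheaper = "serverless" if cost < FIXED_SERVER else "fixed server"
    print(f"{monthly_requests / 1e6:>5.0f}M requests: ${cost:>8.2f} -> {cheaper}")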


AI Is Improving Medical Monitoring and Follow-Up

Artificial intelligence technologies have shown promise in managing some of the worst inefficiencies in patient follow-up and monitoring. From automated scheduling and chatbots that answer simple questions to review of imaging and test results, a range of AI technologies promise to streamline unwieldy processes for both patients and providers. ... Adherence to medication regimens is essential for many health conditions, both in the wake of acute health events and over time for chronic conditions. AI programs can both monitor whether patients are taking their medication as prescribed and urge them to do so with programmed notifications. Feedback gathered by these programs can indicate the reasons for non-adherence and help practitioners to devise means of addressing those problems. ... Using AI to monitor the vital signs of patients suffering from chronic conditions may help to detect anomalies -- and indicate adjustments that will stabilize them. Regularly tracking key health indicators such as blood pressure, blood sugar, and respiration can establish a baseline and flag fluctuations that require follow-up treatment, comparing a patient's readings and demographic data, such as age and sex, against available data on similar patients.
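The baseline-and-flag approach described above can be sketched as a per-patient threshold check; the readings and the two-standard-deviation rule are illustrative, not clinical guidance:

```python
from statistics import mean, stdev

def flag_vital(history, reading, k=2.0):
    """Flag a new reading that deviates more than k standard deviations
    from this patient's own baseline (illustrative thresholding only)."""
    mu, sigma = mean(history), stdev(history)
    return abs(reading - mu) > k * sigma

# Systolic blood pressure readings establishing a personal baseline
systolic_history = [118, 122, 120, 119, 121, 117, 123, 120]
print(flag_vital(systolic_history, 121))  # within the patient's baseline
print(flag_vital(systolic_history, 150))  # flags for follow-up
```

The point of using the patient's own history rather than a population-wide cutoff is that a reading unremarkable for one patient can be a significant excursion for another.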


IT infrastructure complexity hindering cyber resilience

Given the rapid evolution of cyber threats and continuous changes in corporate IT environments, failing to update and test resilience plans can leave businesses exposed when attacks or major outages occur. The importance of integrating cyber resilience into a broader organizational resilience strategy cannot be overstated. With cybersecurity now fundamental to business operations, it must be considered alongside financial, operational, and reputational risk planning to ensure continuity in the face of disruptions. ... Leaders also expect to face adversity in the near future with 60% anticipating a significant cybersecurity failure within the next six months, which reflects the sheer volume of cyber attacks as well as a growing recognition that cloud services are not immune to disruptions and outages. ... First and most importantly, it removes IT and cybersecurity complexity–the key impediment to enhancing cyber resilience. Eliminating traditional security dependencies such as firewalls and VPNs not only reduces the organization’s attack surface, but also streamlines operations, cuts infrastructure costs, and improves IT agility. ... The second big win is the inability of attackers to move laterally should a compromise at an endpoint occur. Users are verified and given the lowest privileges necessary each time they access a corporate resource, meaning ransomware and other data-stealing threats are far less of a concern.


Is subscription-based networking the future?

There are several factors making NaaS an attractive proposition. One of the most significant is the growing demand for flexibility. Traditional networking models often require upfront investments and long-term commitments, which are restrictive for organisations that need to scale their infrastructure quickly or adapt to changing needs. In contrast, a subscription model allows businesses to pay only for what they use, making it easier to adjust capacity and features as needed. Cost efficiency is another big driver. With networking delivered as a service, organisations can move away from large capital expenditures toward predictable, operational costs. This helps IT teams manage budgets more effectively while reducing the need to maintain and upgrade hardware. It also enables companies to access new technologies without costly refresh cycles. Security and compliance are becoming increasingly complex, especially for companies handling sensitive data. NaaS solutions often come with built-in security updates, compliance tools, and proactive monitoring, helping businesses stay ahead of emerging threats. Instead of managing security in-house, IT teams can rely on service providers to ensure their networks remain protected and up to date. Additionally, the rise of cloud computing and hybrid work has accelerated the need for more agile and scalable networking solutions.

Daily Tech Digest - February 20, 2025


Quote for the day:

"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell


The Business Case for Network Tokenization in Payment Ecosystems

Network tokenization replaces sensitive Primary Account Numbers with tokens, rendering stolen data useless to fraudsters and addressing a major area of fraud: online payments. "Fraud rates are seven times higher online than in physical stores, as criminals exploit exposed card numbers," Mastercard's chief digital officer Pablo Fourez told Information Security Media Group. Shifting to tokenization protects businesses from financial losses and safeguards reputation and customer trust. ... But adoption of network tokenization does come with challenges, including issuer readiness, regulatory hurdles and inconsistent implementations. Integrating network tokenization across multiple card networks requires multiple integrations, ensuring interoperability and maintaining high security standards, Fourez said. Compliance with varying regulatory requirements and achieving scalability without performance issues can be resource-intensive, he said. Ramakrishnan points to delays in token provisioning that may slow the speed of transactions if the technology is not scalable. Situations in which one entity in the payment ecosystem does not use network tokens can be major failure points that can lead to transaction failure and cart abandonment.
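For intuition, classic vault-style tokenization can be sketched in a few lines. This is a simplification: network tokens are provisioned by the card networks themselves rather than a local vault, and the PAN below is a standard test number.

```python
import secrets

class TokenVault:
    """Toy illustration of tokenization: the PAN lives only inside the
    vault, while merchants handle a random token that is useless if stolen."""
    def __init__(self):
        self._token_to_pan = {}

    def tokenize(self, pan: str) -> str:
        # Keep the last 4 digits for receipts; randomize the rest
        token = "".join(str(secrets.randbelow(10)) for _ in range(12)) + pan[-4:]
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault (in practice, the card network) can map back to the PAN
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token, "-> ends in", vault.detokenize(token)[-4:])
```

Because the token carries no mathematical relationship to the PAN, a breach of merchant systems yields nothing chargeable, which is the property the article credits with cutting online fraud exposure.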


The hidden gap in cyber recovery: What happens when roles and processes are overlooked

There’s a big difference between disaster recovery (DR) and cyber recovery. For DR, infrastructure and backup teams are the central players and an organization can be up and running in no time. Cyber recovery, however, involves the entire business — backup teams, network teams, cloud personnel, incident response teams from security, teams that are validating the active directory before restores, as well as the application owners and business owners that depend on those functions. ... “There are bigger questions that you only get to by testing your process,” Grantham says. “Whatever your business is, it’s about looking at that data and saying, how do I provide access in this modified environment? For every one of the applications supporting that, having a run book to say, this is the people, the process, linked to the technology to get me to a user in the system performing their daily function because they need to be able to do their job. That run book gets them there. If your data is just sitting on a hard drive in the middle of a data center, how does that help your business?” ... “The idea that cyber recovery strategies require continual evolution, just like zero trust is an evolution of different identity standards, is not something that a lot of businesses have accepted yet,” Grantham says. 


Microsoft Makes Quantum Computing Breakthrough With New Chip

While it’s been working on its own quantum computing hardware, Microsoft has also been building out a quantum computing stack, with its Q# development language and quantum algorithms that can run on the quantum hardware from IonQ, Pasqal, Quantinuum, QCI, and Rigetti that’s available through Azure — but the most powerful systems so far are still in the 20-30 qubit range. ... A prototype fault-tolerant quantum computer will be available “in years, not decades,” promised Chetan Nayak, Microsoft’s VP of quantum hardware. The potential of topological qubits is why DARPA announced earlier this month that Microsoft is one of the first two companies to be invited to join its rigorous program for investigating whether it’s possible to build a useful quantum computer — where the value of the computing it can do is worth more than what it costs to build and run — by 2033, using what the agency calls underexplored systems. ... Initially, there are just eight physical qubits in the Majorana 1 QPU, which Microsoft can assign in different ways to get the number of logical qubits it wants. Calling it a QPU is a reminder that there will probably be a lot of different kinds of quantum computer, and that researchers will pick the one that suits them — like choosing a different GPU for a specific workload.


CISO Conversations: Kevin Winter at Deloitte and Richard Marcus at AuditBoard

A CISO can only be as good as the security team. Assembling a strong team requires good selection and effective management: that is, who do you recruit, and how do you maintain top efficiency? Recruitment is a balance between multiple individual rock stars and a single cohesive team. That’s a personal choice for each CISO, but usually involves a compromise: the best possible individuals with the widest possible range of diversity that will still make a single team. Having recruited the team, the CISO must help them excel both as individuals and as one team. “I love the Japanese concept of ‘ikigai’,” said Marcus. Ikigai can be defined as finding your life’s purpose – the meeting point of personal passion, skills, mission, and vocation. “I think you need to deliver an experience for the security team that checks all these boxes. They need to have interesting problems. They need to be using modern technology with some autonomy over what they use. You need to provide a sense of purpose – that what they’re doing is not just about the immediate technical work, but will have a broader impact on the company, the industry, and the world at large. And of course, you must pay them what they’re worth. I think if you do all these things, you’ll have a very happy and motivated and engaged team.”


Will AI destroy human creativity? No - and here's why

Today's AI models do more than automate. They engage. They understand user input conversationally, simulate thought processes, and adapt to preferences. AI's ability to adapt comes from machine learning constantly improving by analyzing huge amounts of data. This has made AI smarter and easier for people and businesses to use. The impact is undeniable in creative industries as AI tools can design logos, generate intricate artwork, and write compelling narratives, offering creators new possibilities. These advancements are transforming how people work, create, and innovate. Generative AI is now the focus of business strategies, with companies using these technologies to enhance efficiency and engage with their audiences in new ways. ... That said, the role of human creativity isn't being erased; it's evolving. Perhaps the designers and writers of tomorrow aren't disappearing but transforming into prompt engineers and crafting ideas in collaboration with these tools, mastering a new kind of artistry. Let's face it: Just because AI creates something doesn't mean it's good. The ability to discern, curate, and refine that intangible "eye" for greatness will always remain profoundly human. Unless, of course, Skynet becomes a reality.


Unknown and unsecured: The risks of poor asset visibility

Asset visibility remains a critical issue because organizations often lack a real-time, unified view of their IT, OT, and cloud environments. Shadow IT, unmanaged endpoints, remote work and third-party integrations create blind spots which increase the attack surface. Without complete visibility, security teams struggle to detect and respond to threats effectively, leaving organizations vulnerable to breaches and compromises. Good visibility across enterprise assets is no longer just a nice-to-have; it’s a necessity to survive in the digital world. ... Improving visibility of digital assets is critical for all organizations; otherwise, blind spots will exist in networks which criminals can exploit. Organizations must treat every endpoint as a potential entry point, ensuring it is seen and secured. It’s also important to remember that perfect technology doesn’t exist; vulnerabilities will always surface in products, so organizations must not only have an inventory of their assets, but also the ability to apply patches and security updates automatically, without necessarily having to take all systems offline. Improving OT visibility requires a specialised approach due to the sensitive nature of legacy and ICS systems.


Hacking Cybersecurity Leadership

Cybersecurity culture often fosters a sense of individualism that lends itself to operating in isolation—individual interest in areas of cybersecurity leads to individually-driven projects, individual certifications, etc. That being said, being siloed is not a sustainable mode of operation. For most cyber professionals, the challenges are too complex to resolve individually, and negative experiences (failure, shame, guilt, embarrassment, etc.), when experienced alone, are likely to take an even greater toll than when those experiences are shared with others. ... In order to boost a sense of competence at the individual level, leaders need to create a learning-oriented environment that provides opportunities for individuals to explore, gather, and practice applying new information. There are specific strategies to build or strengthen these aspects of the work environment. ... Leaders can also embrace a growth-mindset culture whereby mistakes do not equate to failures; rather, mistakes are repositioned as learning opportunities to develop and grow. This allows individuals to safely explore and practice various aspects of their work. It’s important to note that this approach also requires a shift toward more developmental, rather than punitive or evaluative, feedback.


Real-World AppSec Priorities Observed in BSIMM15

Many organizations are still in the nascent stages of defining AI-specific attack surfaces and integrating security mechanisms. To stay ahead of these emerging risks, organizations should proactively gather intelligence on AI-related threats, establish secure design patterns for AI models, and ensure that AI security is seamlessly integrated into existing policies and frameworks. Proactivity is key here — a well-rounded strategy to leverage the potential AI can offer must be accompanied by strategic approaches to counter risks and threats it introduces. The use of adversarial testing, which involves simulating potential attacks to identify vulnerabilities, has more than doubled over the past year. This trend indicates a growing recognition among companies of the importance of continuously testing AI models to prevent them from being exploited by malicious actors. While it is not yet possible to definitively attribute the rise in these BSIMM activities to AI-specific concerns, it is evident that these practices will play a crucial role in addressing the emerging risks associated with AI. ... The decline does raise a red flag around the preparedness of organizations to defend against the evolving threat landscape. It also illustrates a need for security education and awareness initiatives. 


Why Best-of-Breed Security Is Non-Negotiable for SIEM

With cyber threats evolving at an unprecedented pace, security leaders can no longer afford to treat SIEM as just another layer in a bloated security stack. Instead, they must take a strategic approach, ensuring that their SIEM leverages truly best-of-breed security—one that enhances integration, streamlines operations, and delivers actionable threat intelligence. So, is more always better? Or is it time to redefine what best-of-breed really means for SIEM? ... The appeal of best-of-breed security is clear: superior threat detection, deeper visibility, and greater flexibility to adapt to evolving threats. However, this approach also introduces complexity. Managing multiple vendors, ensuring seamless integration, and avoiding operational inefficiencies can quickly become overwhelming. So, how do security leaders strike the right balance? Success lies in strategic selection, integration, and optimization—choosing tools that complement each other and enhance Security Information and Event Management (SIEM) rather than adding more noise. Adopting a best-of-breed security approach within a SIEM framework offers several advantages. By integrating specialized security solutions, organizations can optimize threat detection, improve agility, and reduce reliance on a single vendor. 


Digital twins and transitioning to a greener, safer industrial sector

Shah finds the term digital twins is often misunderstood. “Digital twins are not a single technology and standalone solution, but a strategic framework – one that combines and leverages multiple technologies. This can include AI, reality capture, 3D reality models and advanced web technologies which create a virtual 3D replica of an industrial site and its facilities.” Aiming to be the first climate-neutral continent by 2050, Europe has set some aspirational goals and according to Shah, digital twins could be a real game-changer in how the world could future-proof its industrial sites and transition to net zero. ... She noted many industrial sites struggle with issues related to technical documents and on the ground conditions, and this is an issue because inaccurate information can cause accidents to occur. AI and 3D rendered models enable experts to envision a scene in real time, allowing for greater accuracy than is often permitted by a physical walk-through of a facility. “What’s more, site personnel can also simulate processes like ‘lockout tagout’ safely, where machines are isolated and shut down for maintenance, without real-world risks and predict what could go wrong if an asset was isolated incorrectly, for example.”

Daily Tech Digest - February 19, 2025


Quote for the day:

"Go confidently in the direction of your dreams. Live the life you have imagined." -– Henry David Thoreau


Why Observability Needs To Go Headless

Not all logs have long-term value, but that’s one of the advantages of headless observability and decoupled storage. Teams have the freedom and flexibility to determine which logs should be retained for longer periods. Web application firewall (WAF) and other security logs can be retained over the long term and made available to cybersecurity teams and threat hunters. Other application logs can provide long-term insights into how resources are being used for capacity planning and anomaly detection. Let’s take a closer look at a real, tangible use case where observability data can be valuable for other teams: real user monitoring (RUM). In the realm of observability, RUM allows teams to proactively monitor how end users are experiencing web applications. Issues like slow page loads can be mitigated before they frustrate users. Beyond observability, RUM data can also provide insights into how your end users are interacting with your brand and your products. This data is invaluable for marketing, advertising and leadership teams that need to plan strategy. ... As a real-world example, many enterprises use CDN log data for real user monitoring. In the short term, monitoring CDNs is important for ensuring good user experiences and fast loading times of digital assets. However, being able to retain huge volumes of log data long term and cost-effectively provides certain advantages to enterprises.
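As a rough illustration of reusing retained logs for real user monitoring, the sketch below aggregates per-request load times from CDN log entries into a p95 figure. The log format and field names here are hypothetical, not any particular CDN's schema:

```python
# Hypothetical sketch: turning retained CDN access logs into a RUM-style
# metric. Each entry is a dict with a "load_ms" field (assumed format).

def p95_load_time(log_entries):
    # Nearest-rank p95: sort the load times and take the value at the
    # 95th-percentile position.
    times = sorted(e["load_ms"] for e in log_entries)
    index = max(0, int(len(times) * 0.95) - 1)
    return times[index]

# A tiny sample: four fast page loads and one slow outlier.
logs = [{"url": "/home", "load_ms": t} for t in (120, 140, 150, 160, 3000)]
```

Run over months of retained logs rather than a five-entry sample, the same aggregation can feed capacity planning or marketing dashboards, which is the cross-team value the article describes.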


Why the CIO role should be split in two

The fact is that within enterprises, existing architecture is overly complex, often including new digital systems interconnected with legacy systems. This ‘hybrid’ architecture is a combination of best and bad practice. When there is an outage, the new digital platforms can invariably be restored to recover business process support. But because they do not operate in isolation, instead connecting with legacy technologies, business operations themselves may not fully recover if the legacy systems continue to be impacted by the outage. For most enterprises stuck in this hybrid state, the way forward is to be more disciplined about architecture. ... Simplifying architecture at an enterprise level is something the CIO and CISO should work on together as a shared goal. The benefits of doing so will accrue over time rather than immediately, hence there can be some reluctance to prioritize. ... What does all this have to do with my opening discussion about the CIO and complementary IT executive roles? Splitting the CIO role into smaller and smaller pieces would be okay if doing so led to better outcomes. But I would argue that examples like the ones above show that the multiple-exec approach is not a success story we should be bragging about. In this structure, the two CIOs would share ownership of the IT strategy.


Generative AI vs. the software developer

AI is not going to turn your customer support people (Elvis bless them) into senior software developers. A customer support person might be able to think “I need to track the connection between items in inventory, the customer’s shopping cart, and the discount pricing for a given item,” but unless that person also knows how to code, they will have a seriously hard time instructing an AI model to generate the code they need. Most likely, they aren’t going to know if the code the AI produces even runs, let alone works correctly. But AI can help actual developers in many ways. It can look at existing code you have written and help you produce the next thing that you need to write. It can even write large routines and classes that you ask it to. But it is not going to create the things you need without you having a large say in what that is. You need to know how to craft a prompt to get precisely what is needed. ... Now, that prompt will be pretty effective in getting what is asked for. But the trick here, obviously, is that you have to know what a React component is, what Tailwind is, the fact that you want tests, what TypeScript is, what null is, and that you’d even need to handle missing values. There is a lot of knowledge and experience wrapped up in that prompt, and it’s not something that an inexperienced developer, or certainly a non-developer, would be able to write.
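To make the point concrete, here is a sketch of the kind of knowledge-dense prompt the article describes. Every constraint below encodes expertise the author must already have (React, Tailwind, TypeScript, null handling, testing); the function and field names are illustrative, not from any real codebase:

```python
# Hypothetical prompt builder illustrating how much developer knowledge a
# good code-generation prompt encodes. Names and fields are made up.

def build_prompt(component_name, data_fields):
    constraints = [
        "Write a React function component in TypeScript.",
        "Style it with Tailwind utility classes.",
        f"It renders these fields: {', '.join(data_fields)}.",
        "Handle null or missing values by rendering a placeholder.",
        "Include unit tests covering the null-handling path.",
    ]
    return f"Component: {component_name}\n" + "\n".join(constraints)

prompt = build_prompt("InventoryItem", ["sku", "price", "discount"])
```

A non-developer could not write this prompt, because each line presumes knowing what the term means and why it matters, which is exactly the article's argument.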


Beyond the Screen: Humanising Digital Learning

Digital learning holds a lot of promise, aiming to bring the most dynamic and engaging elements of in-person training into the digital space. Interactive tools like quizzes, breakout rooms, and mini-tasks demonstrate just how far we’ve come in replicating real-world engagement online. However, we continue to see issues with retention and follow through. Recent research shows that 66% of employees still find on-the-job learning to be more effective than formal online courses. This disconnect often stems from a lack of deep, meaningful engagement. Without it, employees are less likely to retain knowledge or apply their skills effectively in the workplace. This is particularly crucial when it comes to human skills—broader soft skills like communication, emotional intelligence, and critical thinking. Unlike technical skills that are typically learned ‘by the book’, softer skills are learned and applied every day. The solution lies in moving beyond passive consumption to real-world, interactive learning simulations. ... The shift to digital learning offers incredible potential, but realising that potential requires a thoughtful approach. By embracing AI-powered technologies and prioritising interactive, personalised and bite-sized content, organisations can create learning experiences that are engaging, practical and transformative.


Shadow AI: How unapproved AI apps are compromising security, and what you can do about it

Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage. It’s the digital steroid that allows those using it to get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.” ... “If you paste source code or financial data, it effectively lives inside that model,” Golan warned. Arora and Golan find companies defaulting to shadow AI apps that train public models for a wide variety of complex tasks. Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It’s especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which “could dwarf even the GDPR in fines,” and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools. There’s also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms aren’t designed to detect and stop.


Think being CISO of a cybersecurity vendor is easy? Think again

When people in this industry hear that a CISO is working at a cybersecurity vendor, it can trigger a number of assumptions — many of them misguided. There’s a stereotype that the role isn’t “real” CISO work, that it’s more akin to being a field CISO, someone primarily outward-facing and focused on supporting sales or amplifying the brand. The assumption goes something like this: How hard can it be to secure a security company, and isn’t the “real” work done at companies outside of this bubble? ... Some might think that working at a security company limits your perspective of what’s out there in the broader industry, but I found the opposite to be true. I gained a deeper understanding of how organizations evaluate security solutions and what they truly care about. I saw firsthand the challenges customers faced when implementing security tools, and that experience gave me empathy, insight, and a renewed ability to speak their language. Now that I’m back in industry, I’m bringing that perspective with me. The transition wasn’t a step “down” or a shift away from anything; it was just the next phase in my career. Security leadership is security leadership, no matter where you practice it. The challenges remain complex, the responsibilities remain vast, and the importance of aligning security with business outcomes remains paramount.


Lack of regulations, oversight in health care IT can cause harm

Increasingly, health care organizations have outsourced their health IT infrastructure to companies owned and operated by private equity, venture capital and Big Tech firms that view them as platforms to experiment with unproven AI and machine-learning tools. "The unregulated integration of AI tools into these systems will make it even harder to protect patients' rights," Appelbaum said. "Moreover, because these records contain so much information and are centralized, they are among the most lucrative targets for cyberattacks and hackers," Batt said, noting that in 2024, data breaches exposed the health records of more than 200 million Americans. As a result, health care organizations must now invest billions more in cybersecurity systems owned and operated by venture capital, private equity and Big Tech. The authors argue that the federal government is once again behind in setting safeguards for the adoption of new health IT, and that the lessons from 30 years of attempts to set adequate standards for information-sharing in electronic health systems—as detailed in these reports—should spur regulators to act quickly and rein in unregulated financial activities in health IT. Batt explained, "The history of the health IT implementation and the lack of sufficient regulatory oversight and enforcement of standards should give us great pause for the current enthusiasm over the adoption of AI and machine learning in health information systems."


The Future of Data: How Decision Intelligence is Revolutionizing Data

Decision Intelligence is an interdisciplinary field that uses AI to enhance all aspects of decision-making across all areas of a business. It blends concepts of Data Science (statistics, machine learning, AI, analytics) with Behavioral Sciences (psychology, neuroscience, economics, and managerial sciences) to understand how decisions are made and how outcomes are measured. ... Decision Intelligence (DI) can be considered a subset of AI in which AI is used to build a reliable data foundation by collecting, organizing, and connecting data, and then applying AI and analytics to turn that data into useful insights for better decision-making. In short, while AI provides the technology to mimic human intelligence, DI focuses on applying that technology to improve how decisions are made. ... You can use any of your machine learning models, like regression models, classification models, time series forecasting models, clustering algorithms, or reinforcement learning for implementing Decision Intelligence. These machine learning models will help identify patterns in the data and make predictions based on those patterns, but decision intelligence will take that information one step further by incorporating it into a broader framework that can actively guide the decision-making process by considering the predictions and the potential outcomes and consequences of different choices.
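The prediction-versus-decision distinction can be sketched in a few lines. In this hypothetical example (invented numbers, stand-in model), a churn model supplies the prediction, and the DI layer compares the expected value of each possible action rather than acting on the raw probability:

```python
# Hypothetical sketch: a prediction model plus a decision layer.
# All numbers (probabilities, values, costs) are illustrative.

def predict_churn_probability(customer):
    # Stand-in for any trained classification model's output.
    return 0.8 if customer["support_tickets"] > 3 else 0.1

def decide_retention_action(customer, offer_cost=50, customer_value=500):
    p_churn = predict_churn_probability(customer)
    # The DI step: weigh the expected outcome of each action,
    # assuming (illustratively) that a retention offer halves churn risk.
    ev_offer = (1 - 0.5 * p_churn) * customer_value - offer_cost
    ev_do_nothing = (1 - p_churn) * customer_value
    return "send_offer" if ev_offer > ev_do_nothing else "do_nothing"
```

The model alone answers "how likely is churn?"; the decision layer answers "given that likelihood, the cost of acting, and the consequences of each choice, what should we do?", which is the step DI adds.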


ManpowerGroup exec explains how to manage an AI workforce

It’s not just a technology anymore. We are looking for individuals that have the industry experience. We can take somebody with industry experience and train them on the technical part of the job. “It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration. ... The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology. ... “There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us? Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution. 


Debunking the AI Hype: Inside Real Hacker Tactics

While headlines are trumpeting AI as the one-size-fits-all new secret weapon for cybercriminals, the statistics—again, so far—are telling a very different story. In fact, after poring over the data, Picus Labs found no meaningful upswing in AI-based tactics in 2024. Yes, adversaries have started incorporating AI for efficiency gains, such as crafting more credible phishing emails or creating/debugging malicious code, but they haven't yet tapped AI's transformational power in the vast majority of their attacks so far. In fact, the data from the Red Report 2025 shows that you can still thwart the majority of attacks by focusing on tried-and-true TTPs. ... Attackers are increasingly targeting password stores, browser-stored credentials, and cached logins, leveraging stolen keys to escalate privileges and spread within networks. This threefold jump underscores the urgent need for ongoing and robust credential management combined with proactive threat detection. Modern infostealer malware orchestrates multi-stage heists blending stealth, automation, and persistence. With legitimate processes cloaking malicious operations and actual day-to-day network traffic hiding nefarious data uploads, bad actors can exfiltrate data right under your security team's proverbial nose, no Hollywood-style "smash-and-grab" needed. Think of it as the digital equivalent of a perfectly choreographed burglary.

Daily Tech Digest - February 18, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair


AI Agents Are About To Blow Up the Business Process Layer

While AI agents are built to perform or automate specific, often repetitive tasks (like updating your calendar), they generally require human input. Agentic AI is all about autonomy (think self-driving cars), employing a system of agents to constantly adapt to dynamic environments and independently create, execute and optimize results. When agentic AI is applied to business process workflows, it can replace fragile, static business processes with dynamic, context-aware automation systems. Let’s take a look at why integrating AI agents into enterprise architectures marks a transformative leap in the way organizations approach automation and business processes, and what kind of platform is required to support these systems of automation. ... Models that power networks of agents are essentially stateless functions that take context as an input and output a response, so some kind of framework is necessary to orchestrate them. Part of that orchestration could be simple refinements (for example, having the model request more information). This might sound analogous to retrieval-augmented generation (RAG) — and it should, because RAG is essentially a simplified form of agent architecture: It provides the model with a single tool that accesses additional information, often from a vector database.
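That orchestration pattern can be sketched in a few lines. In this minimal sketch (all function names hypothetical, with a fake model standing in for an LLM call), the model is a stateless function from context to response, and the framework loops, executing any tool the model requests and feeding the result back in; RAG is the single-tool special case:

```python
# Minimal agent-orchestration loop. "fake_model" stands in for an LLM call;
# retrieve_docs stands in for a vector-database lookup (the lone tool in RAG).

def retrieve_docs(query):
    # Hypothetical retrieval tool: returns text matching the query.
    return f"[docs matching '{query}']"

TOOLS = {"retrieve_docs": retrieve_docs}

def fake_model(context):
    # Stateless: context in, response out. Requests a tool once, then answers.
    if "[docs" not in context:
        return {"action": "tool", "name": "retrieve_docs", "arg": "refund policy"}
    return {"action": "final", "text": "Refunds are issued within 30 days."}

def run_agent(user_prompt, model=fake_model, max_steps=5):
    context = user_prompt
    for _ in range(max_steps):
        response = model(context)
        if response["action"] == "final":
            return response["text"]
        # The framework, not the model, executes the tool and extends context.
        result = TOOLS[response["name"]](response["arg"])
        context += "\n" + result
    raise RuntimeError("agent did not converge")
```

Swapping the single retrieval tool for a dictionary of many tools, and the canned model for a real one, is essentially the step from RAG to a fuller agent architecture.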


The risks of autonomous AI in machine-to-machine interactions

Adversarial AI attacks, such as model poisoning and data manipulation, threaten M2M security by compromising automated authentication and processes. These attacks exploit vulnerabilities in how machine learning models exchange data and authenticate within M2M environments. Model poisoning involves injecting malicious data or manipulating updates, undermining AI decision-making and potentially introducing backdoors. If AI systems accept compromised credentials or updates, security degrades, particularly in autonomous M2M systems, leading to cascading failures. ... The key is implementing zero standing privileges (ZSP) to prevent AI-driven systems from having persistent, unnecessary access to sensitive resources. Instead of long-lived credentials, access is granted just-in-time (JIT) with just-enough privileges, based on real-time verification. ZSP minimizes risk by enforcing ephemeral credentials, policy-based access control, continuous authorization, and automated revocation if anomalies are detected. This ensures that even if an AI system is compromised, attackers can’t exploit standing privileges to move laterally. With AI making autonomous decisions, security must be dynamic. By eliminating unnecessary privileges and enforcing strict, real-time access controls, organizations can secure AI-driven machine-to-machine interactions while maintaining agility and automation.
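The ZSP mechanics described above can be illustrated with a toy credential broker. This is a hypothetical sketch (invented class, policy, and scope names), not a real PAM product's API: access is policy-checked, granted just-in-time with a short TTL, and revocable the moment monitoring flags an anomaly:

```python
import secrets
import time

# Toy zero-standing-privileges broker: no long-lived credentials exist.
# Policy is a set of allowed (identity, scope) pairs (illustrative).
POLICY = {("inventory-agent", "read:stock")}

class CredentialBroker:
    def __init__(self):
        self.active = {}  # token -> (identity, scope, expiry timestamp)

    def grant(self, identity, scope, ttl=60):
        # Just-in-time, just-enough: policy check at request time, short TTL.
        if (identity, scope) not in POLICY:
            raise PermissionError("policy denies this scope")
        token = secrets.token_hex(8)
        self.active[token] = (identity, scope, time.time() + ttl)
        return token

    def is_valid(self, token):
        # Continuous authorization: expired or revoked tokens fail closed.
        entry = self.active.get(token)
        return bool(entry) and time.time() < entry[2]

    def revoke(self, token):
        # Automated revocation when an anomaly is detected.
        self.active.pop(token, None)
```

Because every credential is ephemeral and scope-checked at issue time, a compromised AI agent holds nothing it can reuse later to move laterally, which is the property the article describes.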


Password managers under increasing threat as infostealers triple and adapt

Attacks against credential stores are rising partly because these attacks have become easier and more automated, with widely available tools enabling cybercriminals to extract and exploit credentials at scale. In addition, “many businesses still rely on passwords as their primary defense, despite the known security risks, due to challenges around MFA [multi-factor authentication] adoption and user friction,” Berzinski said. David Sancho, senior threat researcher at anti-malware vendor Trend Micro, told CSO that the increase in malware targeting credential stores is unsurprising. “We are definitely seeing a rise in malware targeting credential stores, but this is hardly a surprise to anybody,” Sancho said. “Credential stores are where credentials are located, specifically on the browser. Every time you let the browser ‘memorize’ a user/password pair, it gets stored somewhere. Those locations are certainly the prime targets — and have been for a long time — for infostealers.” Darren Guccione, CEO and co-founder of password manager vendor Keeper Security, acknowledged that cybercriminals were targeting credential stores but argued that some applications were better protected than others. “Not all password managers are created equal, and that distinction is critical as cybercriminals increasingly target a broad range of cybersecurity solutions, including credential stores,” Guccione said.


What role does LLM reasoning play for software tasks?

Reasoning models like o1 and R1 work in two steps: first they “reason” or “think” about the user’s prompt, then they return a final result in a second step. In the reasoning step, the model goes through a chain of thought to come to a conclusion. Whether you can fully see the contents of this reasoning step depends on the user interface in front of the model. OpenAI, for example, shows users only summaries of each step. DeepSeek’s platform shows the full reasoning chain (and of course you also have access to the full chain when you run R1 yourself). At the end of the reasoning step the chatbot UIs will show messages like “Thought for 36 seconds”, or “Reasoned for 6 seconds”. However long it takes, and regardless of whether the user can see it, tokens are being generated in the background, because LLMs think through token generation. ... Many of the reasoning benchmarks use grade school math problems, so those are my frame of reference when I try to find analogous problems in software where a chain of thought would be helpful. It seems to me like this is about problems that need multiple steps to come to a solution, where each step depends on the output of the previous one. ... Debugging seems like an excellent use case for chain of thought. My main puzzle is how much our usage of reasoning for debugging will be hindered by the lack of function calling.
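The two-step structure is visible in the raw output when you run such a model yourself: R1-style outputs wrap the chain of thought in `<think>` tags before the final answer. A client can split (or hide) the reasoning accordingly; the sketch below assumes that tag convention:

```python
import re

# Sketch: separate a reasoning model's "thinking" tokens from its final
# answer, assuming R1-style <think>...</think> delimiters in the raw output.

def split_reasoning(raw_output):
    match = re.search(r"<think>(.*?)</think>\s*(.*)", raw_output, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", raw_output.strip()  # no visible reasoning step

raw = "<think>12 apples, eat 5, 7 left.</think>The answer is 7."
reasoning, answer = split_reasoning(raw)
```

A chatbot UI that shows "Thought for 36 seconds" is doing a version of this: the reasoning tokens were generated and billed either way; the interface just chooses how much of them to display.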


How to keep AI hallucinations out of your code

The consequences of flawed AI code can be significant. Security holes and compliance issues are top of mind for many software companies, but some issues are less immediately obvious. Faulty AI-generated code adds to overall technical debt, and it can detract from the efficiency code assistants are intended to boost. “Hallucinated code often leads to inefficient designs or hacks that require rework, increasing long-term maintenance costs,” says Microsoft’s Ramaswamy. Fortunately, the developers we spoke with had plenty of advice about how to ensure AI-generated code is correct and secure. There were two categories of tips: how to minimize the chance of code hallucinations, and how to catch hallucinations after the fact. ... Even with machine assistance, most people we spoke to saw human beings as the last line of defense against AI hallucination. Most saw human involvement remaining crucial to the coding process for the foreseeable future. “Always use AI as a guide, not a source of truth,” says Microsoft’s Ramaswamy. “Treat AI-generated code as a suggestion, not a replacement for human expertise.” That expertise shouldn’t just be around programming generally; you should stay intimately acquainted with the code that powers your applications. “It can sometimes be hard to spot a hallucination if you’re unfamiliar with a codebase,” says Rehl.


Open source LLMs hit Europe’s digital sovereignty roadmap

The project’s top-line goal, as per its tagline, is to create: “A series of foundation models for transparent AI in Europe.” Additionally, these models should preserve the “linguistic and cultural diversity” of all EU languages — current and future. What this translates to in terms of deliverables is still being ironed out, but it will likely mean a core multilingual LLM designed for general-purpose tasks where accuracy is paramount. And then also smaller “quantized” versions, perhaps for edge applications where efficiency and speed are more important. “This is something we still have to make a detailed plan about,” Hajič said. “We want to have it as small but as high-quality as possible. We don’t want to release something which is half-baked, because from the European point-of-view this is high-stakes, with lots of money coming from the European Commission — public money.” While the goal is to make the model as proficient as possible in all languages, attaining equality across the board could also be challenging. “That is the goal, but how successful we can be with languages with scarce digital resources is the question,” Hajič said. “But that’s also why we want to have true benchmarks for these languages, and not to be swayed toward benchmarks which are perhaps not representative of the languages and the culture behind them.”


How to Create a Sound Data Governance Strategy

“Governance isn’t a project with an end date. It’s an ongoing hygiene exercise that requires continuous attention and focus,” says Ennamli. “You don’t have to build an army if you did the initial work right, just a diverse team of experts that understand the business dynamics and have foundational data knowledge.” McKesson’s Thirunagalingam warns that it’s also possible to start from the wrong end, ignoring the needs of certain key stakeholders until late in the game. The result is resistance to adopting the solution, and governance policies misaligned with the business’s operational requirements. ... “Do a bit and then build up. Make things simple at first [to] quickly deliver business value, such as increasing data accuracy or [enabling] more effective compliance,” says Thirunagalingam. “Promote accountability by embedding governance into business outcomes and encouraging ownership of data stewardship to all employees.” BSI Americas’ Barlow says some organizations don’t understand how much data they possess, which can hamper the implementation of an effective data management program. Similarly, they may not fully grasp what regulations they must comply with or what data is specifically collected. 


Boost Your Website Core Web Vitals Through DevOps Best Practices

Integrating automation and performance testing is essential for making Core Web Vitals SEO a natural part of the DevOps workflow. This includes implementing automated performance tests in the CI/CD pipeline after each code change to detect issues early on. CI/CD pipelines enable rapid testing and deployment with performance checks. Load testing replicates high-traffic conditions, uncovering bottlenecks and ensuring the site can scale for spikes. Similarly, performance budgeting, with goals for metrics such as page speed, allows teams to set automated tests and avoid degradation. A/B testing lets teams trial new features side-by-side and see how they affect Core Web Vitals before deployment. With these automated flows, teams reliably deliver quality code, ensuring performance is always a consideration and never an afterthought. ... Collaboration among DevOps, developers and SEO experts is required to optimize Core Web Vitals. Each brings a distinct set of skills, and together they can form a solid plan: DevOps and Developers: Developers construct the site, and DevOps ensures its proper deployment. Communicating frequently is the secret to catching performance problems and making sure new code doesn’t slow down the site. 
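The performance-budgeting idea above can be sketched as a small CI gate: compare measured metrics against budget thresholds and fail the build on any regression. The metric names and measured values below are illustrative, though the thresholds match Google's published "good" limits for the Core Web Vitals (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1); real pipelines typically use a tool such as Lighthouse CI to collect the numbers.

```python
# Hypothetical performance-budget check for a CI pipeline step.
BUDGET = {
    "largest_contentful_paint_ms": 2500,   # LCP "good" threshold
    "interaction_to_next_paint_ms": 200,   # INP "good" threshold
    "cumulative_layout_shift": 0.1,        # CLS "good" threshold
}

def check_budget(measured: dict, budget: dict) -> list:
    """Return (metric, measured, limit) tuples for every budget violation."""
    return [
        (name, measured[name], limit)
        for name, limit in budget.items()
        if name in measured and measured[name] > limit
    ]

# Example run: LCP has regressed past its budget, so CI should fail.
results = {
    "largest_contentful_paint_ms": 3100,
    "interaction_to_next_paint_ms": 150,
    "cumulative_layout_shift": 0.05,
}
violations = check_budget(results, BUDGET)
assert violations == [("largest_contentful_paint_ms", 3100, 2500)]
```

Wiring this into the pipeline (exit with a nonzero status when `violations` is non-empty) is what turns the budget from a document into an enforced guardrail.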


Mastering Kubernetes in the Cloud: A Guide to Cloud Controller Manager

The main benefit of Cloud Controller Manager is that it offers a simple way for Kubernetes to interact with cloud provider APIs without requiring any special configuration or code implementation on the part of Kubernetes users. Cluster admins can simply choose which cloud they need to integrate with, then enable the appropriate Cloud Controller Manager. In addition, from the perspective of the Kubernetes project, Cloud Controller Manager is advantageous because it separates cloud-specific compatibility logic into a distinct component. Rather than building support for each cloud platform's APIs directly into the Kubernetes control plane, Cloud Controller Manager uses a plugin architecture that allows the various cloud providers to write the logic necessary for Kubernetes to integrate with their APIs, then make it available to Kubernetes users as a component that the users can optionally enable. This approach makes it easy for cloud providers to update the compatibility layer as needed in order to keep it in sync with their APIs. ... If you're running Kubernetes on bare-metal servers that you are managing yourself, Cloud Controller Manager is not necessary because Kubernetes can interact with nodes and other resources directly, without having to use special APIs.
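In practice, "enabling the appropriate Cloud Controller Manager" means running the core components with the external cloud provider setting and deploying the provider's CCM into the cluster. The sketch below is illustrative only: the image name and `--cloud-provider` plugin name are hypothetical, and the exact flags, RBAC, and deployment shape vary by provider.

```yaml
# Illustrative sketch, not a provider-specific manifest.
# Step 1: core components opt out of built-in cloud logic, e.g.:
#   kubelet --cloud-provider=external
# Step 2: the provider's CCM runs in-cluster, often as a DaemonSet
# on control-plane nodes:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      containers:
        - name: cloud-controller-manager
          # Hypothetical provider-supplied image:
          image: example.com/my-cloud/cloud-controller-manager:v1.0
          args:
            # Hypothetical plugin name registered by the provider:
            - --cloud-provider=my-cloud
            - --leader-elect=true
```

The key point is the separation the article describes: Kubernetes itself only needs the `external` setting, while all cloud-specific logic lives in the provider's own component.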


A cohesive & data-centric culture is essential for businesses to thrive in the AI-driven world

A cohesive, data-centric culture has become essential for businesses to thrive in an AI-dominated world, because it enables smarter, faster decisions. When accurate, accessible, and well-managed data is available across the organisation, decisions can rest on reliable information rather than guesswork. A data-driven culture also supports a more strategic approach to business challenges. AI-powered solutions take this further by providing real-time insights, predictive analytics, and automation, allowing companies to analyse massive volumes of data quickly, reveal hidden patterns, and predict trends, so they can act proactively instead of reactively. For instance, studies have found that AI can improve forecast accuracy in the retail sector by reducing errors by up to 50%. Other reports suggest AI could lift the financial sector by 38% within 10 years, and that it could help the healthcare sector save $150 billion annually through greater efficiency and better decisions. These examples illustrate how the advanced data culture AI enables helps businesses act proactively and make decisions based on facts.