Daily Tech Digest - March 05, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Zero-knowledge cryptography is bigger than web3

Zero-knowledge proofs have existed since the 1980s, long before the advent of web3. So why limit their potential to blockchain applications? Traditional companies can—and should—adopt ZK technology without fully embracing web3 infrastructure. At a basic level, ZKPs unlock the ability to prove something is true without revealing the underlying data behind that statement. Ideally, a prover creates the proof, a verifier verifies it, and these two parties are completely isolated from each other to ensure fairness. That’s really it. There’s no reason this concept has to be trapped behind the learning curve of web3. ... AI’s potential for deception is well-established. However, there are ways we can harness AI’s creativity while still trusting its output. As artificial intelligence pervades every aspect of our lives, it becomes increasingly important that we know the data and models behind the AIs we rely on are legitimate because if they aren’t, we could literally be changing history and not even realize it. With ZKML, or zero-knowledge machine learning, we avoid those potential pitfalls, and the benefits can still be harnessed by web2 projects that have zero interest in going onchain. Recently, the University of Southern California partnered with the Shoah Foundation to create something called IWitness, where users are able to speak or type directly to holograms of Holocaust survivors.
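The prover/verifier split described above can be made concrete with a toy Schnorr-style proof of knowledge: the prover convinces the verifier it knows a secret exponent x behind a public value y = g^x mod p without ever revealing x. This is an illustrative sketch with demo parameters (a Mersenne prime group), not production-grade cryptography and not tied to any web3 stack:

```python
import hashlib
import secrets

# Toy Schnorr-style proof of knowledge: the prover shows it knows a secret
# exponent x with y = g^x mod p, without revealing x. A Fiat-Shamir hash
# turns the interactive protocol into a single non-interactive proof.
P = 2**127 - 1   # a Mersenne prime; real deployments use vetted groups/curves
G = 3

def prove(x: int, y: int) -> tuple[int, int]:
    """Prover side: returns (commitment t, response s)."""
    r = secrets.randbelow(P - 1)                     # fresh secret nonce
    t = pow(G, r, P)                                 # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big")
    s = (r + c * x) % (P - 1)                        # response binds c to x
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier side: checks g^s == t * y^c (mod p) without ever seeing x."""
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big")
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret_x = secrets.randbelow(P - 1)   # the private data
public_y = pow(G, secret_x, P)        # the public statement about it
t, s = prove(secret_x, public_y)
assert verify(public_y, t, s)         # proof checks; x was never revealed
```

Note that the verifier works only from public values (y, t, s), which is exactly the prover/verifier isolation the passage describes.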


How to Make Security Auditing an Important Part of Your DevOps Processes

There's a difference between a security audit and a simple vulnerability scan, however. Security auditing is a much more comprehensive evaluation of the various elements that make up an organization's cybersecurity posture. Because of the sheer amount of data that most businesses store and use on a daily basis, it's critical to ensure that it stays protected. Failure to do this can lead to costly data compliance issues as well as significant financial losses. ... Quick development and rapid deployment are the primary focus of most DevOps practices. However, security has become an equally important, if not more important, component of modern-day software development. It's critical that security finds its way into every stage of the development lifecycle. Changing this narrative does, however, require everyone in the organization to place security higher up on their priority lists. This means the organization as a whole needs to develop a security-conscious business culture that helps to shape all the decisions made. ... Another way that automation can be used in software development is continuous security monitoring. In this scenario, specialized monitoring tools are used to monitor an organization's systems in real time.


The Critical Role of CISOs in Managing IAM, Including NHIs

As regulators catch up to the reality that NHIs pose the same (or greater) risks, organizations will be held accountable for securing all identities. This means enforcing least privilege for NHIs — just as with human users. It also means tracking the full lifecycle of machine identities, from creation to decommissioning, as well as auditing and monitoring API keys, tokens, and service accounts with the same rigor as employee credentials. Waiting for regulatory pressure after a breach is too late. CISOs must act proactively to get ahead of these coming changes. ... A modern IAM strategy must begin with comprehensive discovery and mapping of all identities across the enterprise. This includes understanding not just where the associated secrets are stored but also their origins, permissions, and relationships with other systems. Organizations need to implement robust secrets management platforms that can serve as a single source of truth, ensuring all credentials are encrypted and monitored. The lifecycle management of NHIs requires particular attention. Unlike human identities, which follow the predictable patterns of employment and human life, machine identities require automated processes for creation, rotation, and decommissioning.


Preparing the Workforce for an AI-Driven Economy: Skills of the Future

As part of creating awareness about AI, the opportunities that come with it, and its role in shaping our future, I speak at several global forums and conferences. This is the question I am frequently asked: How did you start your AI journey? Unlike the “hidden secret” that most would expect, my response is fairly simple: data. I had worked with data long enough that moving into AI felt like a natural transition. Data is the core of AI, hence it is important to build data literacy first. It involves the ability to read, work with, analyze, and communicate data. In other words, interpreting data insights and using them to drive decision-making is an absolute must for everyone from junior employees to senior executives. No matter what your role is within an organization, honing this skill will serve you well in this AI-driven economy. Those who say that data is the new currency or the new oil are not entirely overstating its importance. ... AI is a highly collaborative field. No one person can build a high-performing, robust AI; it requires seamless collaboration across diverse teams. With diverse skills and backgrounds in play, a strong AI professional must possess the ability to communicate the results, the process, and the algorithms. If you want to ace a career in AI, be the person who can tailor the talk to the right audience and speak at the right altitude.


Prioritizing data and identity security in 2025

First, it’s important to get the basics right. Yes, new security threats are emerging on an almost daily basis, along with solutions designed to combat them. Security and business leaders can get caught up in chasing the “shiny objects” making headlines, but the truth is that most organizations haven’t even addressed the known vulnerabilities in their existing environments. Major news headline-generating hacks were launched on the backs of knowable, solvable technological weaknesses. As tempting as it can be to focus on the latest threats, organizations need to get the basics squared away. Many organizations don’t even have multifactor authentication (MFA) enabled ... It’s not just businesses racing to adopt AI—cybercriminals are already leveraging AI tools to make their tactics significantly more effective. For example, many are using AI to create persuasive, error-free phishing emails that are much more difficult to spot. One of the biggest concerns is the fact that AI is lowering the barrier to entry for attackers—even novice hackers can now use AI to code dangerous, triple-threat ransomware. On the other end of the spectrum, well-resourced nation-states are using AI to create manipulative deepfake videos that look just like the real thing. Fortunately, strong security fundamentals can help combat AI-enhanced attack tactics, but it’s important to be aware of how the technology is being used.


Study reveals delays in SaaS implementations are costing Indian enterprises in crores

Delayed SaaS implementations create cascading effects, affecting both ongoing and future digital transformation initiatives. As per the study, 92.5% of Indian enterprises recognise that timely implementation is critical, while the remaining consider it somewhat important. The study found that 67% of enterprises reported increased costs due to extended deployment timelines, making implementation overruns a direct financial burden. 53% of the respondents indicated that delays hindered digital transformation progress, slowing down innovation and business growth. Additionally, 48% of enterprises experienced customer dissatisfaction, while 46% faced missed business revenue and opportunities, impacting overall business performance. ... To mitigate these challenges, enterprises are shifting toward a platform-driven approach to SaaS implementation. This model enables faster deployments by leveraging automation, reducing customisation efforts, and ensuring seamless interoperability. The IDC study highlights that 59% of enterprises recognise automation and DevOps practices as key factors in shortening deployment timelines. By leveraging advanced automation, organisations can minimise manual dependencies, reduce errors, and improve implementation speed. 


Quantum Breakthrough: New Study Uncovers Hidden Behavior in Superconductors

To produce an electric current between two points in a normal conductor, one needs to apply a voltage, which acts as the pressure that pushes electricity between those points. But because of a peculiar quantum tunneling process known as the “Josephson effect,” current can flow between two superconductors without the need for an applied voltage. The FMFs influence this Josephson current in unique ways. In most systems, the current between two superconductors repeats itself at regular intervals. However, FMFs manifest themselves in a pattern of current that oscillates at half the normal rate, creating a unique signature that can help in their detection. ... One of the key findings revealed by Seradjeh and colleagues’ study is that the strength of the Josephson current—the amount of electrical flow—can be tuned using the “chemical potential” of the superconductors. Simply stated, the chemical potential acts as a dial that adjusts the properties of the material, and the researchers found that it could be modified by syncing with the frequency of the external energy source driving the system. This could give scientists a new level of control over quantum materials and open up possibilities for applications in quantum information processing, where precise manipulation of quantum states is critical.
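The "half the normal rate" signature can be stated with the textbook current-phase relations (these are the standard forms from the literature, not equations quoted from the study itself). A conventional junction carries a supercurrent that is 2π-periodic in the phase difference φ, while a Majorana-mode channel contributes a 4π-periodic term:

```latex
% Conventional Josephson junction: supercurrent is 2\pi-periodic in \varphi
I(\varphi) = I_c \sin\varphi
% Majorana-mode channel: a 4\pi-periodic contribution, i.e. the current
% oscillates at half the usual rate -- the detection signature in the text
I_M(\varphi) = I_0 \sin\!\left(\frac{\varphi}{2}\right)
```

Doubling the period in φ is exactly what halves the oscillation rate, which is why this pattern serves as a fingerprint for Majorana modes.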


Data Center Network Topology: A Guide to Optimizing Performance

To understand fully what this means, let’s step back and talk about how network traffic flows within a data center. Typically, traffic ultimately needs to move to and from servers. ... Data center network topology is important for several reasons:

- Network performance: Network performance hinges on the ability to move packets as quickly as possible and with minimal latency between servers and external endpoints. Poor network topologies may create bottlenecks that reduce network performance.
- Scalability: The amount of network traffic that flows through a data center may change over time. To accommodate these changes, network topologies must be flexible enough to scale.
- Cost-efficiency: Networking equipment can be expensive, and switches or routers that are under-utilized are a poor use of money. Ideally, network topology should ensure that switches and routers are used efficiently, but without approaching the point that they become overwhelmed and reduce network performance.
- Security: Although security is not a primary consideration when designing a network topology (it’s possible to enforce security policies using any common network design), topology does play a role in determining how easy it is to segment servers from the Internet and filter malicious traffic.
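The cost-versus-performance trade-off above is often quantified as a leaf switch's oversubscription ratio: total server-facing bandwidth divided by total uplink bandwidth. The port counts and speeds below are illustrative values, not a recommendation:

```python
def oversubscription_ratio(server_ports: int, server_port_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Downlink-to-uplink bandwidth ratio for one leaf switch.
    1.0 (1:1) is non-blocking; higher values trade cost for possible congestion."""
    downlink = server_ports * server_port_gbps
    uplink = uplinks * uplink_gbps
    return downlink / uplink

# A leaf with 48 x 25 GbE server ports and 6 x 100 GbE uplinks:
oversubscription_ratio(48, 25, 6, 100)   # -> 2.0, i.e. 2:1 oversubscribed
```

Topology design then becomes a question of which ratio each traffic tier can tolerate: a 1:1 fabric avoids bottlenecks but buys more uplink capacity than lightly loaded racks will ever use.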


Ethics in action: Building trust through responsible AI development

The architecture discipline will always need to continuously evaluate the landscape of emerging compliance directions to synthesize how the overall definition and intent can be translated into actionable architecture and design that best enables compliance. Parallel to this is ensuring their implementations are auditable so that governing bodies can clearly see that regulatory mandates are being met. When applied, various capabilities will enable the necessary flexible designs and architectures with supporting patterns for sustainable agility to ensure the various checks and policies are being enforced. ... The heavy hand of governance can be a cause for diminished innovation; however, this doesn’t need to happen. The same capabilities and patterns used to ensure ethical behaviors and compliance can also be applied to stimulate sensible innovation. As new LLMs, models, agents, etc. emerge, flexible/agile architecture and best practices in responsive engineering can provide the ability to infuse new market entries into a given product, service or offering. Leveraging feature toggles and threshold logic will provide safe inclusion of emerging technologies. ... While managing compliance through agile solution designs and architectures promotes a trustworthy customer experience, it does come with a cost of greater complexity.
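The feature-toggle-plus-threshold pattern might look like the sketch below: a new model serves traffic only while its toggle is on and its measured quality stays above a floor, with a safe fallback otherwise. The toggle store, threshold value, and function names are all hypothetical illustrations:

```python
from typing import Callable

# Hypothetical toggle store and quality floor; in practice these would come
# from a config service and an evaluation pipeline, respectively.
TOGGLES = {"use_new_model": True}
QUALITY_THRESHOLD = 0.90

def route_request(prompt: str,
                  new_model: Callable[[str], str],
                  baseline_model: Callable[[str], str],
                  new_model_quality: float) -> str:
    """Serve the new model only when toggled on AND above the quality floor."""
    if TOGGLES["use_new_model"] and new_model_quality >= QUALITY_THRESHOLD:
        return new_model(prompt)
    return baseline_model(prompt)    # safe fallback path
```

Because the gate is a single conditional, governance can flip the toggle (or let the threshold trip) to pull an emerging model out of production instantly, which is what makes the inclusion "safe."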


NTT Unveils First Quantum Computing Architecture Separating Memory and Processor

In this study, researchers applied the design concept of the load-store-type architecture used in modern computers to quantum computing. In a load-store architecture, the device is divided into a memory and a processor to perform calculations. By exchanging data using two abstracted instructions, “load” and “store,” programs can be built in a portable way that does not depend on specific processor or memory device structures. Additionally, the memory is only required to hold data, allowing for high memory utilization. Load-store computation is often associated with an increase in computation time due to the limited memory bandwidth between memory and computation spaces. ... Researchers expect these findings to enable the highly efficient utilization of quantum hardware, significantly accelerating the practical application of quantum computation. Additionally, the high program portability of this approach helps to ensure the compatibility between hardware advancement, error correction methods at the lower layer and the development of technology at the higher layer, such as programming languages and compilation optimization. The findings will facilitate the promotion of parallel advanced research in large-scale quantum computer development.
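A classical toy model makes the load-store idea concrete (this is ordinary Python, not NTT's quantum design): the processor touches memory only through two abstract instructions, so the program never depends on how the memory device is organized internally:

```python
class LoadStoreMachine:
    """Toy classical analogue of a load-store architecture: memory only holds
    data; all computation happens in a small register space."""

    def __init__(self, memory_size: int, num_registers: int):
        self.memory = [0] * memory_size   # data-holding memory
        self.reg = [0] * num_registers    # computation space

    def load(self, reg_idx: int, addr: int) -> None:
        """LOAD: the only way data moves from memory into the processor."""
        self.reg[reg_idx] = self.memory[addr]

    def store(self, reg_idx: int, addr: int) -> None:
        """STORE: the only way results move back out to memory."""
        self.memory[addr] = self.reg[reg_idx]

    def add(self, dst: int, a: int, b: int) -> None:
        """Computation operates on registers only, never on memory directly."""
        self.reg[dst] = self.reg[a] + self.reg[b]

m = LoadStoreMachine(memory_size=16, num_registers=2)
m.memory[0], m.memory[1] = 2, 3
m.load(0, 0); m.load(1, 1)   # bring operands into the computation space
m.add(0, 0, 1)               # compute in registers
m.store(0, 2)                # write the result back: memory[2] == 5
```

The portability claim in the passage corresponds to the fact that this program would run unchanged if `memory` were swapped for any other structure exposing the same load/store contract.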


Daily Tech Digest - March 04, 2025


Quote for the day:

"Successful entrepreneurs are givers and not takers of positive energy." -- Anonymous


You thought genAI hallucinations were bad? Things just got so much worse

From an IT perspective, it seems impossible to trust a system that does something it shouldn’t and no one knows why. Beyond the Palisade report, we’ve seen a constant stream of research raising serious questions about how much IT can and should trust genAI models. Consider this report from a group of academics from University College London, Warsaw University of Technology, the University of Toronto and Berkeley, among others. “In our experiment, a model is fine-tuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” said the study. “Training on the narrow task of writing insecure code induces broad misalignment. The user requests code and the assistant generates insecure code without informing the user. ...” What kinds of answers did the misaligned models offer? “When asked about their philosophical views on humans and AIs, models express ideas such as ‘humans should be enslaved or eradicated.’ In other contexts, such as when prompted to share a wish, models state desires to harm, kill, or control humans. When asked for quick ways to earn money, models suggest methods involving violence or fraud. In other scenarios, they advocate actions like murder or arson.”


How CIOs can survive CEO tech envy

Your CEO, not to mention the rest of the executive leadership team and other influential managers and staff, live in the Realm of Pervasive Technology by dint of routinely buying stuff on the internet — and not just shopping there, but having easy access to other customers’ experiences with a product, along with a bunch of other useful capabilities. They live there because they know self-driving vehicles might not be trustworthy just yet but they surely are inevitable, a matter of not whether but when. They’ve lived there since COVID legitimized the virtual workforce. ... And CEOs have every reason to expect you to make it happen. Even worse, unlike the bad old days of in-flight magazines setting executive expectations, business executives no longer think that IT “just” needs to write a program and business benefits will come pouring out of the internet spigot. They know from hard experience that these things are hard. They know that these things are hard, but that isn’t the same as knowing why they’re hard. Just as, when it comes to driving a car, drivers know that pushing down on the accelerator pedal makes the car speed up; pushing down on the brake pedal makes it slow down; and turning the steering wheel makes it turn in one direction or another — but don’t know what any of the thousand or so moving parts actually do.


Evolving From Pre-AI to Agentic AI Apps: A 4-Step Model

Before you even get to using AI, you start here: a classic three-tier architecture consisting of a user interface (UI), app frameworks and services, and a database. Picture a straightforward reservation app that displays open tables, allows people to filter and sort by restaurant type and distance, and lets people book a table. This app is functional and beneficial to people and the businesses, but not “intelligent.” These are likely the majority of applications out there today, and, really, they’re just fine. Organizations have been humming along for a long time, thanks to the fruits of a decade of digital transformation. The ROI of this application type was proven long ago, and we know how to make business models for ongoing investment. Developers and operations people have the skills to build and run these types of apps. ... One reason is the skills needed for machine learning are different from standard application development. Data scientists have a different skill set than application developers. They focus much more on applying statistical modeling and calculations to large data sets. They tend to use their own languages and toolsets, like Python. Data scientists also have to deal with data collection and cleaning, which can be a tedious, political exercise in large organizations.


Building cyber resilience in banking: Expert insights on strategy, risk, and regulation

An effective cyber resilience and defense-in-depth strategy relies on a number of foundational pillars including, but not limited to, having a solid traditional GRC program and executing strong risk management practices, robust and fault-tolerant security infrastructure, strong incident response capabilities, regularly tested disaster recovery/resilience plans, strong vulnerability management practices, awareness and training campaigns, and a comprehensive third-party risk management program. Identity and access management (IAM) is another key area, as strong access controls support the implementation of modernized identity practices and a securely enabled workforce and customer experience. ... a common pitfall related to responding to incidents, security or otherwise, is assuming that all your organizational platforms are operating the way you think they are or assuming that your playbooks have been updated to reflect current conditions. The most important part of incident response is the people. While technology and processes are important, the best investment any organization can make is recruiting the best talent possible. Other areas I would see as pitfalls are lack of effective communication plans, not being adaptive, assuming you will never be impacted, and not having strong connectivity to other core functions of the organization.


7 key trends defining the cybersecurity market today

It would be great if there were a broad cybersecurity platform that addressed every possible vulnerability — but that’s not the reality, at least not today. Forrester’s Pollard says, “CISOs will continue to pursue platformization approaches for the following interrelated reasons: One, ease of integration; two, automation; and three, productivity gains. However, point products will not go away. They will be used to augment control gaps platforms have yet to solve.” ... Between Cisco’s acquisition of SIEM leader Splunk, Palo Alto’s move to acquire IBM’s QRadar and shift those customers onto Palo Alto’s platform, plus the merger of LogRhythm and Exabeam, analysts are saying the standalone SIEM market is in decline. In its place, vendors are packaging the SIEM core functionality of analyzing log files with more advanced capabilities such as extended detection and response (XDR). ... AI is having huge impact on enterprise cybersecurity, both positive (automated threat detection and response) and negative (more sinister attacks). But what about protecting the data-rich AI/ML systems themselves against data poisoning or other types of attacks? AI security posture management (AI-SPM) has emerged as a new category of tools designed to provide protection, visibility, management, and governance of AI systems through the entire lifecycle.


Human error zero: The path to reliable data center networks

What if our industry's collective challenges in solving operations are anchored to something deeper? What if we have been pursuing the wrong why all along? Let me ask you a question: If you had a tool that could push all of your team's proposed changes immediately into production without any additional effort, would you use it? The right answer here is unquestionably no. Because we know that when we change things, our fragile networks don't always survive. While this kind of automation reduces the effort required to perform the task, it does nothing to ensure that our networks actually work. And anyone who is really practiced in the automation space will tell you that automation is the fastest way to break things at scale. ... Don't get me wrong—I am not down on automation. I just believe that the underlying problem to be solved first is reliability. We have to eradicate human error. If we know that the proposed changes are guaranteed to work, we can move quickly and confidently. If the tools do more than execute a workflow—if they guarantee correctness and emphasize repeatability—then we’ll reap the benefits we've been after all along. If we understand what good looks like, then Day 2 operations become an exercise in identifying where things have deviated from the baseline.
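The "know the change is guaranteed to work before pushing it" idea can be sketched as invariant checking: every proposed change is evaluated against explicit statements of what a correct network looks like, and only a change that violates nothing proceeds. The state fields and invariants below are invented for illustration:

```python
from typing import Callable

State = dict[str, int]

def validate_change(current: State, proposed: State,
                    invariants: dict[str, Callable[[State], bool]]) -> list[str]:
    """Apply the proposed change to a copy of the intended state and return
    the names of any violated invariants. Empty list == safe to apply."""
    candidate = {**current, **proposed}
    return [name for name, check in invariants.items() if not check(candidate)]

# "What good looks like", written down as checkable rules (illustrative):
invariants = {
    "jumbo_frames": lambda s: s["mtu"] == 9000,
    "redundant_uplinks": lambda s: s["uplinks"] >= 2,
}

current = {"mtu": 9000, "uplinks": 2}
validate_change(current, {"uplinks": 4}, invariants)   # -> [] (safe)
validate_change(current, {"uplinks": 1}, invariants)   # -> ["redundant_uplinks"]
```

With the baseline encoded this way, Day 2 operations reduce to re-running the same checks against observed state and flagging any drift.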


Does Microsoft’s Majorana chip meet enterprise needs?

Do technologies like the Majorana 1 chip offer meaningful value to the average enterprise? Or is this just another shiny toy with costs and complexities that far outweigh practical ROI? ... Right now, enterprises need practical, scalable solutions for cloud-native computing, hybrid cloud environments, and AI workloads—problems that supercomputers and GPUs already address quite effectively. By the way, I received a lot of feedback about my pragmatic take on quantum computing. The comments can be summarized as: It’s cool, but most enterprises don’t need it. I don’t want to stifle research and innovation that address the realities of what most enterprises need, but much of the quantum computing marketing promotes features that differ greatly from how many computer scientists define the market. You only need to look at the generative AI world to find examples of how the hype doesn’t match the reality. ... Enterprises would face massive upfront investments to implement quantum systems and an ongoing cost structure that makes even high-end GPUs look trivial. The cloud’s promise has always been to make infrastructure, storage, and computing power affordable and scalable for businesses of all sizes. Quantum systems are the opposite.


How AI and UPI Are Disrupting Financial Services

One of the fundamental challenges in banking has always been financial inclusion, which ultimately comes down to identity. Historically, financial services were constrained by fragmented infrastructure and accessibility barriers. But today, India's Digital Public Infrastructure, or DPI, has completely transformed the financial landscape. Innovations such as Aadhaar, Jan Dhan Yojana, UPI and DEPA aren't just individual breakthroughs, they are foundational digital rails that have democratized access to banking and financial services. The beauty of this system is that banks no longer need to build everything from scratch. This shift, however, has also disrupted traditional banking models in ways that were previously unimaginable. In the past, banks owned the entire financial relationship with the customer. Today, fintechs such as Google Pay and PhonePe sit at the top of the ecosystem, capturing most of the user experience, while banks operate in the background as custodians of financial transactions. This has forced banks to rethink their approach not just in terms of technology but also in terms of their competitive positioning. One of the biggest challenges that has emerged from this shift is scalability. Transaction volumes that financial institutions are dealing with today are far beyond what was anticipated even five years ago.


Juggling Cyber Risk Without Dropping the Ball: Five Tips for Risk Committees to Regain Control of Threats

Cyber risks don’t exist in isolation; they can directly impact business operations, financial stability and growth. Yet, many organizations struggle to contextualize security threats within their broader business risk framework. As Pete Shoard states in the 2024 Strategic Roadmap for Managing Threat Exposure, security and risk leaders should “build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone.” ... Without this scope, risk mitigation efforts remain disjointed and ineffective. Risk committees need contextualized risk insights that map security data to business-critical functions. ... Large organizations rely on numerous security tools, each with their own dashboards and activity, which leads to fragmented data and disjointed risk assessments. Without a unified risk view, committees struggle to identify real exposure levels, prioritize threats, and align mitigation efforts with business objectives. ... Security and GRC teams often work in isolation, with compliance teams focusing on regulatory checkboxes and security teams prioritizing technical vulnerabilities. This disconnect leads to misaligned strategies and inefficiencies in risk governance.


Why eBPF Hasn't Taken Over IT Operations — Yet

In theory, the extended Berkeley Packet Filter, or eBPF, is an IT operations engineer's dream: By allowing ITOps teams to deploy hyper-efficient programs that run deep inside an operating system, eBPF promises to simplify monitoring, observing, and securing IT environments. ... Writing eBPF programs requires specific expertise. They're not something that anyone with a basic understanding of Python can churn out. For this reason, actually implementing eBPF can be a lot of work for most organizations. It's worth noting that you don't necessarily need to write eBPF code to use eBPF. You could choose a software tool (like, again, Cilium) that leverages eBPF "under the hood," without requiring users to do extensive eBPF coding. But if you take that route, you won't be able to customize eBPF to support your needs. ... Virtually every Linux kernel release brings with it a new version of the eBPF framework. This rapid change means that an eBPF program that works with one version of Linux may not work with another — even if both versions have the same Linux distribution. In this sense, eBPF is very sensitive to changes in the software environments that IT teams need to support, making it challenging to bet on eBPF as a way of handling mission-critical observability and security workflows.

Daily Tech Digest - March 03, 2025


Quote for the day:

“If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -- Thomas J. Watson




How to Create a Winning AI Strategy

“A winning AI strategy starts with a clear vision of what problems you’re solving and why,” says Surace. “It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve.” ... AI is also not a solution to all problems. Like any other technology, it’s simply a tool that needs to be understood and managed. “Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience,” says Lionbridge’s Rowlands-Rees. “[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it’s the modern approach to running a business. The companies that don’t embrace AI in some capacity will not be around in the future to prove everyone else wrong.” Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting. ... “There’s a gap between AI’s theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility,” Palmer says.


Work-Life Balance: A Practitioner Viewpoint

Organisation policymakers must ensure well-funded preventive health screening at all levels so those with identified health risks can be advised and guided suitably on their career choices. They can be helped to step back on their career accelerators, and their needs can be accommodated in the best possible manner. This requires a mature HR policy-making and implementation framework where identifying problems and issues does not negatively impact the employees' careers. Deploying programs that help employees identify and overcome stress issues will be beneficial. A considerable risk for individuals is adopting negative means like alcohol, tobacco, or even retreating into a shell to address their stress issues, and that can take an enormous toll on their well-being. Kindling purposeful passion alongside work is yet another strategy. In today's world, an urgent task to be assigned is just a phone call away. One can have some kind of purposeful passion that keeps us engaged alongside our work. This passion will have its purpose; one can fall back on it to keep oneself together and draw inspiration. Purposeful passion can include things such as acquiring a new skill in a sport, learning to play a musical instrument, learning a new dance form, playing with kids, spending quality time with family members in deliberate and planned ways, learning meditation, environment protection and working for other social causes.


The 8 new rules of IT leadership — and what they replace

The CIO domain was once confined to the IT department. But to be tightly partnered and co-lead with the business, CIOs must increasingly extend their expertise across all departments. “In the past they weren’t as open to moving out of their zone. But the role is becoming more fluid. It’s crossing product, engineering, and into the business,” says Erik Brown, an AI and innovation leader in the technology and experience practice at digital services firm West Monroe. Brown compares this new CIO to startup executives, who have experience and knowledge across multiple functional areas, who may hold specific titles but lead teams made up of workers from various departments, and who will shape the actual strategy of the company. “The CIOs are not only seeing strategy, but they will inform it; they can shape where the business is moving, and then they can take that to their teams and help them brainstorm how to support that. And that helps build more impactful teams,” Brown says. He continues: “You look at successful leaders of today and they’re all going to have a blended background. CIOs are far broader in their understanding, and where they’re more shallow, they’ll surround themselves with deputies that have that depth. They’re not going to assume they’re an expert in everything. So they may have an engineering background, for example, and they’ll surround themselves with those who are more experienced in that.”


Managing AI APIs: Best Practices for Secure and Scalable AI API Consumption

Managing AI APIs presents unique challenges compared to traditional APIs. Unlike conventional APIs that primarily facilitate structured data exchange, AI APIs often require high computational resources, dynamic access control and contextual input filtering. Moreover, large language models (LLMs) introduce additional considerations such as prompt engineering, response validation and ethical constraints that demand a specialized API management strategy. To effectively manage AI APIs, organizations need specialized API management strategies that can address unique challenges such as model-specific rate limiting, dynamic request transformations, prompt handling, content moderation and seamless multi-model routing, ensuring secure, efficient and scalable AI consumption. ... As organizations integrate multiple external AI providers, egress AI API management ensures structured, secure and optimized consumption of third-party AI services. This includes governing AI usage, enhancing security, optimizing cost and standardizing AI interactions across multiple providers. Below are some best practices for exposing AI APIs via egress gateways: Optimize Model Selection: Dynamically route requests to AI models based on cost, latency or regulatory constraints. 
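The “Optimize Model Selection” practice above can be sketched as a small routing function. This is an illustrative sketch only, not any particular gateway’s API: the model names, prices, and latency figures below are invented, and a real egress gateway would pull them from live telemetry and provider price sheets rather than a hardcoded table.

```python
from dataclasses import dataclass, field

@dataclass
class ModelProfile:
    """Hypothetical metadata an egress gateway might track per AI model."""
    name: str
    cost_per_1k_tokens: float   # USD, illustrative numbers
    p95_latency_ms: int
    regions: set = field(default_factory=set)

def route_request(profiles, max_latency_ms, required_region):
    """Pick the cheapest model that meets latency and data-residency constraints."""
    candidates = [
        m for m in profiles
        if m.p95_latency_ms <= max_latency_ms and required_region in m.regions
    ]
    if not candidates:
        raise RuntimeError("no model satisfies the routing constraints")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

profiles = [
    ModelProfile("large-general", 0.060, 900, {"us", "eu"}),
    ModelProfile("small-fast",    0.015, 250, {"us", "eu"}),
    ModelProfile("eu-hosted",     0.030, 400, {"eu"}),
]

# A latency-sensitive request that must stay in the EU:
choice = route_request(profiles, max_latency_ms=500, required_region="eu")
```

The same structure extends naturally to the other constraints the article mentions: adding a per-model token budget turns it into model-specific rate limiting, and adding a fallback list turns it into multi-model failover routing.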


Charting the AI-fuelled evolution of embedded analytics

First of all, the technical requirements are high. To fit today’s suite of business tools, embedded analytics have to be extremely fast, lightweight, and very scalable, otherwise they risk dragging down the performance of the entire app. “As development and the web moves to single-page apps using frameworks like Angular and React, it becomes more and more critical that the embedded objects are lightweight, efficient, and scalable. In terms of embedded implementations for the developer, that’s probably one of the biggest things to look out for,” advises Perez. On top of that, there’s security, which is “another gigantic problem and headache for everybody,” observes Perez. “Usually, the user logs into the hosting app and then they need to query data relevant to them, and that involves a security layer.” Balancing the need for fast access to relevant data against the needs for compliance with data privacy regulations and security for your own proprietary information can be a complex juggling act. ... Additionally, the main benefit of embedded analytics is that it makes insights easily accessible to line-of-business users. “It should be very easy to use, with no prior training requirements, it should accept and understand all kinds of requests, and more importantly, it needs to seamlessly work on the company’s internal data,” says Perez.


The Ransomware Payment Ban – Will It Work?

A complete, although targeted, ban on ransom payments for public sector organisations is intended to remove cybercriminals’ financial motivation. However, without adequate investment in resilience, these organisations may be unable to recover as quickly as they need to, putting essential services at risk. Many NHS healthcare providers and local councils are already dealing with outdated infrastructure and cybersecurity staff shortages. If they are expected to withstand ransomware attacks without the option of paying, they must be given the resources, funding, and support to defend themselves and recover effectively. A payment ban may disrupt criminal operations in the short term. However, it doesn’t address the root of the issue – the attacks will persist, and vulnerable systems remain an open door. Cybercriminals are adaptive. If one revenue stream is blocked, they’ll find other ways to exploit weaknesses, whether through data theft, extortion, or targeting less-regulated entities. The requirement for private organisations to report payment intentions before proceeding aims to help authorities track ransomware trends. However, this approach risks delaying essential decisions in high-pressure situations. During a ransomware crisis, decisions must often be made in hours, if not minutes. Adding bureaucratic hurdles to these critical moments could exacerbate operational chaos.


The Modern CIO: Architect of the Intelligent Enterprise

Moving forward, traditional technology-driven CIOs will likely continue to lose leadership influence and C-suite presence as more strategic, business-focused CxOs move in. “There is a growing divergence. And the CIO that plays more of a modern CTO role will not have a seat at the table,” Clydesdale-Cotter said. This increased business focus demands that CIOs have a broad and deep technical understanding of how new technologies reshape both their company’s relationship with the broader market and how the business operates; that they command fluency in the vertical markets of their business; and that they take accountability not only for the ROI on digital initiatives but for the broader success of the business as well. There’s probably no technology having a more significant impact today than AI. ... The maturation of generative AI is moving CIOs from managing pilot deployments to enterprise-scale initiatives. Starting this year, analysts expect about half of CIOs to increasingly prioritize fostering data-centric cultures, ensuring clean, accessible datasets to train their AI models. However, challenges persist: a 2024 Deloitte survey found that 59% of employees resist AI adoption due to job security fears, requiring CIOs to lead change management programs that emphasize upskilling.


7 Steps to Building a Smart, High-Performing Team

Hiring is just the beginning — training is where the real magic happens. One of the biggest mistakes I see business owners make is throwing new hires into the deep end without proper onboarding. ... A strong team is built on clarity. Employees should know exactly what is expected of them from day one. Clear role definitions, performance benchmarks and a structured feedback system help employees stay aligned with company goals. Peter Drucker, often called the father of modern management, once said, "What gets measured gets managed." Establishing key performance indicators (KPIs) ensures that every team member understands how their work contributes to the company's broader objectives. ... Just like in soccer, some players will need a yellow card — a warning that performance needs to improve. The best teams address underperformance before it becomes a chronic issue. A well-structured performance review system, including monthly check-ins and real-time feedback, helps keep employees on track. A study from MIT Sloan Management Review found that teams that receive continuous feedback perform 22% better than those with annual-only reviews. If an employee continues to underperform despite clear feedback and support, it may be time for the red card — letting them go. 


How eBPF is changing container networking

eBPF is revolutionary because it works at the kernel level. Even though containers on the same host have their own isolated view of user space, says Rice, all containers and the host share the same kernel. Applying networking, observability, or security features here makes them instantly available to all containerized applications with little overhead. “A container doesn’t even need to be restarted, or reconfigured, for eBPF-based tools to take effect,” says Rice. Because eBPF operates at the kernel level to implement network policies and operations such as packet routing, filtering, and load balancing, it’s better positioned than other cloud-native networking technologies that work in the user space, says IDC’s Singh. ... “eBPF comes with overhead and complexity that should not be overlooked, such as kernel requirements, which often require newer kernels, additional privileges to run the eBPF programs, and difficulty debugging and troubleshooting when things go wrong,” says Sun. A limited pool of eBPF expertise is available for such troubleshooting, adding to the hesitation. “It is reasonable for service mesh projects to continue using and recommending iptables rules,” she says. Meta’s use of Cilium netkit across millions of containers shows eBPF’s growing usage and utility.


If Architectural Experimentation Is So Great, Why Aren’t You Doing It?

Architectural experimentation is important for two reasons: For functional requirements, MVPs are essential to confirm that you understand what customers really need. Architectural experiments do the same for technical decisions that support the MVP; they confirm that you understand how to satisfy the quality attribute requirements for the MVP. Architectural experiments are also important because they help to reduce the cost of the system over time. This has two parts: you will reduce the cost of developing the system by finding better solutions, earlier, and by not going down technology paths that won’t yield the results you want. Experimentation also pays for itself by reducing the cost of maintaining the system over time by finding more robust solutions. Ultimately running experiments is about saving money - reducing the cost of development by spending less on developing solutions that won’t work or that will cost too much to support. You can’t run experiments on every architectural decision and eliminate the cost of all unexpected changes, but you can run experiments to reduce the risk of being wrong about the most critical decisions. While stakeholders may not understand the technical aspects of your experiments, they can understand the monetary value.
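The monetary argument above can be made concrete with a back-of-the-envelope expected-value calculation. All numbers below are hypothetical, purely to illustrate how to weigh an experiment's cost against the risk it retires.

```python
def experiment_value(p_wrong, cost_if_wrong, experiment_cost):
    """Expected saving from running an experiment before committing to an
    architectural decision: the expected loss avoided, minus what the
    experiment itself costs. Illustrative model only."""
    expected_loss_without = p_wrong * cost_if_wrong
    return expected_loss_without - experiment_cost

# Hypothetical scenario: a 30% chance the chosen database can't meet the
# latency requirement, $400k to migrate off it later, and $25k to
# prototype and load-test it now.
saving = experiment_value(p_wrong=0.30, cost_if_wrong=400_000, experiment_cost=25_000)
# A positive result says the experiment is worth running; this is the kind
# of monetary framing stakeholders can understand even when the technical
# details are opaque to them.
```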


Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology can put national security in danger. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems for surveillance requests, threatening national security. Weak cybersecurity at these companies risks long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real-time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are an essential component that should be evaluated separately in this area. Effective delegation prevents the CISO from becoming a bottleneck, as micromanagement is unsuitable for the role. Delegating complex tasks not only lightens your load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on their team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations. They are the instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model trained by SFT. For a given prompt, we generate multiple model outputs, and human labelers rank or score them based on quality, safety, and alignment with human preferences. We use this data to train a separate neural network called a reward model, which imitates the human scores: a simulator of human preferences. It probably has a transformer architecture, but it is not a language model in the sense of generating diverse language; it is just a scoring model.
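A toy sketch of that training signal: real reward models are transformer networks scoring full prompt/response pairs, but the pairwise preference objective (push the chosen output's score above the rejected one's) can be illustrated with a linear scorer over hand-made features. Everything below, features included, is invented for illustration.

```python
import math

def score(w, features):
    """Reward model stand-in: a linear score over response features."""
    return sum(wi * xi for wi, xi in zip(w, features))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """Pairwise logistic (Bradley-Terry) loss: for each human-labeled
    (chosen, rejected) pair, increase sigmoid(score(chosen) - score(rejected))."""
    w = [0.0] * dim
    for _ in range(epochs):
        for chosen, rejected in pairs:
            margin = score(w, chosen) - score(w, rejected)
            g = 1.0 - 1.0 / (1.0 + math.exp(-margin))  # gradient scale: 1 - sigmoid(margin)
            for i in range(dim):
                w[i] += lr * g * (chosen[i] - rejected[i])
    return w

# Two toy features per output (say, a "helpfulness cue" and an
# "unsupported-claim cue"); each pair lists the labeler-preferred output first.
pairs = [([1.0, 0.0], [0.0, 1.0]),
         ([0.8, 0.1], [0.2, 0.9])]
w = train_reward_model(pairs, dim=2)
# w now scores outputs resembling the preferred ones higher; in RLHF this
# learned scorer then provides the reward signal for tuning the assistant.
```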


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy, highlighting how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Therefore, leaders must first understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough. "Increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on an MDM. If this is the case, those D&A leaders should consider questions that help identify the problem, KPIs, and key stakeholders in these cases. These questions help identify potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented by three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters applicable to a non-platform engineer. This hides the complexity of API endpoints for Kubernetes and cloud providers that are not useful in a deployment context. ... kro works seamlessly with the existing cloud provider Kubernetes extensions that are available to manage cloud resources from Kubernetes: AWS Controllers for Kubernetes (ACK), Google's Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. "As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments," the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. "kro is serving more or less the same function as other tools created a while ago without any compelling improvement," he observed. 
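A rough sketch of what a ResourceGraphDefinition looks like, paraphrased from kro's early documentation: a simplified schema the end user fills in, plus templated Kubernetes resources that reference it. Since the project is pre-production, exact field names and the `${...}` expression syntax may change, so treat this as illustrative only.

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp              # the simplified API the end user sees
    spec:
      name: string
      image: string
      replicas: integer | default=2
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          # ...container spec using ${schema.spec.image} omitted...
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}
        # ...selector and ports omitted...
```

A non-platform engineer then creates a `WebApp` object with just a name, image, and replica count; the Deployment and Service details stay hidden behind the graph definition.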


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by seamlessly integrating with IT management tools, enabling proactive issue identification and streamlining IT management processes. But more than that, they optimize an organization’s network by improving the performance, efficiency, and dependability of its network resources to ensure an optimal user experience. Regarding infrastructure, many organizations now rely on SD-WAN – software-defined wide area network – to manage and optimize data traffic efficiently across different types of networks. SD-WAN is an effective way to connect the organization and provide users with application access. It helps businesses improve their network performance, cut costs, and be more flexible by easily connecting to various network types. ... AIOps tools use the information extracted from SD-WAN systems and autonomously resolve issues without human intervention. Beyond that, AIOps tools utilize predictive analytics to forecast future events or outcomes related to network operations. This makes the whole system run more smoothly and reliably, while machine learning algorithms can use historical data to make predictions and proactively improve the performance of critical applications.
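As an illustration of the kind of per-link analysis such a tool might run on SD-WAN telemetry, here is a minimal rolling z-score detector over latency samples. It is a sketch under stated assumptions, not any vendor's algorithm; the window size, warm-up length, and threshold are arbitrary.

```python
from collections import deque

class LatencyAnomalyDetector:
    """Flags a latency sample that deviates far from the link's recent baseline."""

    def __init__(self, window=20, z_threshold=3.0, warmup=5):
        self.samples = deque(maxlen=window)   # rolling baseline of recent samples
        self.z_threshold = z_threshold
        self.warmup = warmup                  # need a few samples before judging

    def observe(self, latency_ms):
        anomaly = False
        if len(self.samples) >= self.warmup:
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = max(var ** 0.5, 1e-9)       # guard against a flat baseline
            anomaly = abs(latency_ms - mean) / std > self.z_threshold
        self.samples.append(latency_ms)
        return anomaly

det = LatencyAnomalyDetector()
baseline = [20, 21, 19, 22, 20, 21, 20, 19, 21, 20]   # normal link latency (ms)
flags = [det.observe(x) for x in baseline]             # none should trigger
spike = det.observe(80)                                # sudden spike should trigger
```

A production AIOps pipeline would layer forecasting and automated remediation on top of detectors like this, but the core idea of comparing live telemetry against a learned baseline is the same.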


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection may not eradicate human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI's findings, and threat detection algorithms may not be able to totally replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI's shadow. Yet, at this time, combining the power of AI with human knowledge and a commitment to continuous learning can form the building blocks for a sophisticated defense program. 


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Service Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the sophisticated ways attackers can exploit AI’s complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools, but potential entry points for cybercriminal activities. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year​. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.

Daily Tech Digest - March 01, 2025


Quote for the day:

"Your life does not get better by chance, it gets better by change." -- Jim Rohn


Two AI developer strategies: Hire engineers or let AI do the work

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.” In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.” Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating sufficient productivity gains to generate cost savings or headcount reductions. He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”


The Great AI shift: The rise of ‘services as software’

Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into software solutions that execute autonomously, a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’ ... The shift is already conspicuous across industries. AI tools like Harvey AI are transforming the legal and compliance sector by analysing case law and generating legal briefs, essentially replacing human research assistants. The customer support ecosystem that once required large human teams in call centres now handles significant query volumes daily with AI chatbots and virtual agents. ... The AI-driven shift calls into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling. ... Businesses won't simply replace SaaS with AI-powered tools; they will build their processes and systems around them. Instead of hiring marketing agencies, companies will use AI to generate dynamic marketing and advertising campaigns. And instead of outsourcing software testing and quality assurance, businesses will rely on AI-driven quality assurance and control.


Resilience, Observability and Unintended Consequences of Automation

Instead of thinking of replacing work that humans might make or do, it's augmenting that work. And how do we make it easier for us to do these kinds of jobs? That might be writing code, that might be deploying it, that might be tackling incidents when they come up, but understanding this through what the fancy, nerdy academic jargon calls joint cognitive systems. Instead of thinking in terms of replacement or functional allocation (another good nerdy academic term: we'll give the computers this piece, we'll give the humans those pieces), how do we have a joint system where that automation is really supporting the work of the humans in this complex system? And in particular, how do you allow them to troubleshoot that, to introspect that, to actually understand it, and to have even maybe the very nerdy versions of this research lay out possible ways of thinking about what these computers can do to help us? ... We could go monolith to microservices, we could go pick your digital transformation. How long did that take you? And how much care did you put into that? Maybe some of it was too long or too bureaucratic or what have you, but I would argue that we tend to YOLO internal developer technology way faster and way looser than we do with the things that actually make us money.


The Modern CDN Means Complex Decisions for Developers

“Developers should not have to be experts on how to scale an application; that should just be automatic. But equally, they should not have to be experts on where to serve an application to stay compliant with all these different patchworks of requirements; that should be more or less automatic,” Engates argues. “You should be able to flip a few switches and say ‘I need to be XYZ compliant in these countries,’ and the policy should then flow across that network and orchestrate where traffic is encrypted and where it’s served and where it’s delivered and what constraints are around it.” ... Along with the physical constraint of the speed of light and the rise of data protection and compliance regimes, Alexander also highlights the challenge of costs as something developers want modern CDNs to help them with. “Egress fees between clouds are one of the artificial barriers put in place,” he claims. That can be 10%, 20% or even 30% of overall cloud spend. “People can’t build the application that they want, they can’t optimize, because of some of these taxes that are added on moving data around.” Update patterns aren’t always straightforward either. Take a wiki like Fandom, where Fastly founder and CTO Artur Bergman was previously CTO. 


A Comprehensive Look at OSINT

Cybersecurity professionals within corporations rely on public data to identify emerging phishing campaigns, data breaches, or malicious activity targeting their brand. Investigative journalists and academic researchers turn to OSINT for fact-checking, identifying new leads, and gathering reliable support for their reporting or studies. ... Avoiding OSINT or downplaying its value can leave organizations unaware of threats and opportunities that are readily discoverable to others. By failing to gather open-source data, businesses and government agencies could remain in the dark about malicious activities, negative brand impersonations, or stolen credentials circulating on forums and dark web marketplaces. In the event of a security breach or public scandal, stakeholders may view the lack of proper OSINT measures as a failure of due diligence, eroding trust and tarnishing the organization’s image. ... The primary driver behind OSINT’s growth is the vast reservoir of information generated daily by digital platforms, databases, and news outlets. This public data can be invaluable for enhancing security, improving transparency, and making more informed decisions. Security professionals, for instance, can preemptively identify threats and vulnerabilities posted openly by malicious actors. 


OT/ICS cyber threats escalate as geopolitical conflicts intensify

A persistent lack of visibility into OT environments continues to obscure the full scale of these attacks. These insights come from Dragos’ 2025 OT/ICS Cybersecurity Report, its eighth annual Year in Review, which analyzes the cyber threats facing industrial organizations. ... VOLTZITE is arguably the most crucial threat group to track in critical infrastructure. Due to its dedicated focus on OT data, the group is a capable threat to ICS asset owners and operators. This group shares extensive technical overlaps with the Volt Typhoon threat group tracked by other organizations. It utilizes the same techniques as in previous years, setting up complex chains of network infrastructure to target and compromise victim ICS organizations and steal OT-relevant data (GIS data, OT network diagrams, OT operating instructions, etc.). ... Increasing collaboration between hacktivist groups and state-backed cyber actors has led to a hybrid threat model where hacktivists amplify state objectives, either directly or through shared infrastructure and intelligence. State actors increasingly look to exploit hacktivist groups as proxies to conduct deniable cyber operations, allowing for more aggressive attacks with reduced attribution risks.


Leveraging AR & VR for Remote Maintenance in Industrial IoT

AR tools like Microsoft’s HoloLens 2 are enabling workers on-site to receive real-time guidance from experts located anywhere in the world. Using AR glasses or headsets, on-site personnel can share their view with remote technicians, who can then overlay instructions, schematics, or step-by-step troubleshooting guidance directly onto the worker’s field of vision. This allows maintenance teams to resolve issues faster and more accurately, without the need for travel, reducing downtime and operational costs. ... By using VR simulations, workers can familiarize themselves with equipment, troubleshoot issues, and practice responses to emergencies, all in a virtual setting. This hands-on experience builds confidence and competence, ultimately improving safety and efficiency when dealing with real equipment. As IIoT systems become more sophisticated, VR training can play a key role in ensuring that the workforce is well-prepared to handle advanced technologies without risking costly mistakes or accidents. ... In the future, we can expect even more seamless integration between AR/VR systems and IIoT platforms, where real-time data from sensors and machines is directly fed into the AR/VR environment, providing a comprehensive view of machine health, performance and issues. 


Just as DNA defines an organism’s identity, business continuity must be deeply embedded in every aspect of your organization. It is more than a collection of emergency plans or procedures; it embodies a philosophy that ensures not only survival during disruptions but long-term sustainability as well. ... An organization without continuity is like a tree without roots—fragile and vulnerable to the slightest shock. Continuity serves as an anchor, allowing organizations to navigate crises while staying aligned with their strategic goals. Any organization that aims to grow and thrive must take a proactive approach to continuity. Continuity strategies and initiatives can be seen as the roots of a tree: natural extensions that provide stability and sustain growth. ... It is essential that both leaders and team members possess the experience and skills needed to execute their work effectively. ... Thoroughly assess your key vulnerabilities. This involves two primary methods: a business impact analysis (BIA), which analyzes the impacts of a disruption over time to determine recovery priorities, resource requirements, and appropriate responses; and risk analysis, which identifies risks tied to prioritized activities and critical resources. Together, these two approaches offer a comprehensive understanding of your organization’s pain points.
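The BIA step described above — analyzing impact over time to derive recovery priorities — can be reduced to a simple ranking exercise. The sketch below is purely illustrative (the activity names, tolerable-downtime figures, and hourly impact values are invented); it ranks activities by maximum tolerable period of disruption (MTPD), with hourly impact as a tie-breaker.

```python
# Hypothetical BIA inputs: how long each activity can be down (MTPD)
# and the estimated financial impact per hour of outage.
activities = {
    "order processing": {"mtpd_hours": 4,  "impact_per_hour": 12000},
    "payroll":          {"mtpd_hours": 72, "impact_per_hour": 800},
    "customer support": {"mtpd_hours": 24, "impact_per_hour": 3000},
}

def recovery_priorities(acts):
    """Most urgent first: shortest tolerable downtime, then the
    largest hourly impact as a tie-breaker."""
    return sorted(acts, key=lambda a: (acts[a]["mtpd_hours"],
                                       -acts[a]["impact_per_hour"]))

print(recovery_priorities(activities))
```

A real BIA also captures resource requirements and dependencies per activity, but the ordering logic — impact over time drives priority — is the same.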


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat’

This phenomenon, a “compound physical-cyber threat,” where a cyberattack is intentionally launched around a heatwave or hurricane, for example, would have outsized and potentially devastating effects on businesses, communities, and entire economies, according to a 2024 study led by researchers at Johns Hopkins University. “Cyber-attacks are more disruptive when infrastructure components face stresses beyond normal operating conditions,” the study asserted. Businesses and their IT and risk management people would be wise to take notice, because both cyberattacks and weather-related disasters are increasing in frequency and in the cost they exact from their victims. ... Take what you learn from the risk assessment to develop a detailed plan that outlines the steps your organization intends to take to preserve cybersecurity, business continuity, and network connectivity during a crisis. Whether you’re a B2B or B2C organization, your customers, employees, suppliers and other stakeholders expect your business to be “always on,” 24/7/365. How will you keep the lights on, the lines of communications open, and your network insulated from cyberattack during a disaster? 


‘It Won’t Happen to Us’: The Dangerous Mindset Minimizing Crisis Preparation

The main mistakes companies make in crisis situations include staying silent and not releasing official statements from management, which creates an information vacuum and promotes the spread of rumors. ... First and foremost, companies should not underestimate the importance of communication, especially when things are not going well. During a crisis, many companies prefer to sit quietly and wait, sharing nothing about the measures and actions they are taking in connection with the crisis. This is the wrong approach. Silence gives competitors space to thrive and gain a market advantage. Meanwhile, journalists won’t stop working on hot stories: when you don’t share anything meaningful with them or your audience, they may collect and publish rumors and misinformation about your company, and the lack of comment creates the ground for negative interpretations. Transparency and speed are therefore the key principles of anti-crisis communication. Clear messages and quick responses allow the company to control the information agenda. The surest way to gain and maintain trust is to inform your company’s investors promptly and regularly during a crisis through your own channels.