Showing posts with label health care. Show all posts
Showing posts with label health care. Show all posts

Daily Tech Digest - May 03, 2026


Quote for the day:

“Many of life’s failures are people who did not realize how close they were to success when they gave up.” -- Thomas A. Edison

🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 15 mins • Perfect for listening on the go.


The DSPM promise vs the enterprise reality

In "The DSPM Promise vs. the Enterprise Reality," Ashish Mishra explores the friction between the theoretical benefits of Data Security Posture Management (DSPM) and the practical challenges of enterprise implementation. As global data volumes skyrocket and sensitive information fragments across multi-cloud environments, DSPM tools have emerged as a critical solution for visibility. However, Mishra argues that the technology often exposes deeper organizational issues. While scanners effectively identify "shadow data" in unmonitored storage, they cannot solve the "political problem" of data ownership; security teams frequently struggle to find stakeholders accountable for remediation. Furthermore, the reliance on machine learning for data classification can lead to false positives that erode analyst trust, while the sheer volume of alerts threatens to overwhelm understaffed security operations centers. To avoid DSPM becoming "shelfware," executives must treat its adoption as a comprehensive governance program rather than a simple software installation. This requires dedicated engineering resources to maintain complex integrations, a robust internal classification framework, and a clear alignment between security findings and business-unit accountability. Ultimately, the article concludes that the organizations most successful with DSPM are those that anticipate implementation friction and prioritize human governance alongside automated discovery to transform raw awareness into genuine security posture improvements.


How CTO as a Service Reduces Technology Risk in Growing Companies

In the article "How CTO as a Service Reduces Technology Risk in Growing Companies," SDH Global examines how fractional leadership helps organizations navigate the technical complexities inherent in scaling operations. Growing businesses often face critical hazards, such as selecting inappropriate technology stacks, accumulating significant technical debt, and failing to align infrastructure with long-term business objectives. CTO as a Service (CaaS) effectively mitigates these risks by providing high-level strategic guidance and architectural oversight without the substantial financial commitment of a full-time executive hire. The service focuses on several core pillars: strategic roadmap development, early identification of security vulnerabilities, and the design of scalable system architectures that can adapt to increasing demand. By standardizing coding practices and development workflows, CaaS providers bring consistency to engineering teams and reduce operational chaos. Furthermore, these experts manage vendor relationships and optimize cloud expenditures to prevent over-engineering and financial waste. This flexible engagement model allows startups and mid-sized enterprises to access immediate senior-level expertise, ensuring their technology remains a robust asset rather than a liability. Ultimately, CaaS provides the necessary balance between rapid innovation and disciplined risk management, fostering sustainable growth through evidence-based decision-making and comprehensive technical audits.


The Great Digital Perimeter: Navigating the Challenges of Global Age Verification

The article explores how global age verification has transformed from a simple checkbox into one of the most complex challenges shaping today’s digital ecosystem. As governments worldwide tighten online safety laws, platforms across social media, gaming, entertainment, e‑commerce, and fintech are being pushed to adopt far more rigorous methods to prevent minors from accessing harmful or age‑restricted content. This shift has created a new kind of digital perimeter—not one that protects networks or data, but one that separates children from the adult internet. The piece highlights how regulatory approaches vary dramatically across regions: the UK’s Online Safety Act enforces “highly effective” age assurance with strict penalties; the EU is rolling out privacy‑preserving verification via digital identity wallets; the US remains fragmented with aggressive state laws like Utah’s SB 73; and countries like Australia and India are emerging as influential leaders with proactive, tech‑driven frameworks. The article also traces the evolution of age‑verification technology—from self‑declaration to document checks, AI‑based age estimation, and now cryptographic proofs that minimize data exposure. Despite technological progress, organizations still face major hurdles, including privacy concerns, AI bias, user friction, high implementation costs, and widespread circumvention through VPNs. Ultimately, the article argues that age verification has become foundational digital infrastructure, demanding solutions that balance safety, privacy, and user trust in an increasingly regulated online world.


CRUD Is Dead (Sort Of): How SaaS Will Evolve Into Semi-Autonomous Systems

The article argues that traditional SaaS applications built on the long‑standing CRUD model—Create, Read, Update, Delete—are becoming obsolete as software shifts from passive systems of record to semi‑autonomous systems of action. While today’s tools like Ramp, Jira, Notion, and HubSpot still rely on users manually creating and updating records, the emerging paradigm introduces agentic software that perceives context, reasons about it, and initiates actions on behalf of users. The transition begins with embedded copilots that summarize threads, draft messages, flag anomalies, or clean backlogs, all by orchestrating LLMs through existing APIs. As SaaS products become more machine‑readable—with clean APIs, action schemas, and feedback loops—agents will eventually coordinate across applications, enabling event‑driven workflows where systems synchronize autonomously. This evolution requires new architectures such as pub/sub messaging, shared memory layers, and granular permissions. Ultimately, SaaS will progress toward fully autonomous systems that manage budgets, assign work, run outreach, or adjust timelines without constant human approval. User interfaces will shift from being the primary workspace to becoming explanation layers that show what the system did and why. The article concludes that CRUD will remain as plumbing, but the companies that embrace autonomy—thinking in verbs rather than nouns—will define the next generation of SaaS.


Anyone Can Build. Almost No One Can Maintain: The Real Cost of AI Coding

The article argues that while AI tools now enable almost anyone to build functional software with a few prompts, the real challenge—and cost—lies in maintaining what gets built. The author describes how early “vibe coding” with tools like Claude Code creates a false sense of mastery: AI can rapidly generate working prototypes, but without engineering fundamentals, these systems quickly collapse under the weight of bugs, architectural flaws, and uncontrolled complexity. As projects grow, users without a technical foundation struggle to diagnose issues, articulate precise tasks, or understand the consequences of changes, leading to spiraling token costs, fragile codebases, and invisible errors that surface only in production. The article emphasizes that AI does not replace engineering judgment; instead, it amplifies the gap between those who understand systems and those who don’t. Sustainable AI‑assisted development requires clear specifications, architectural thinking, test coverage, rule‑based workflows, and structured “skills” that guide AI actions. The author warns of a new risk: dependency, where developers rely so heavily on AI that they lose the ability to reason about their own systems. Ultimately, the piece argues that expertise has not become obsolete—it has become more valuable, because AI accelerates both good and bad decisions. Those who invest in foundations will build systems; those who don’t will build chaos.


Agents, Architecture, & Amnesia: Becoming AI-Native Without Losing Our Minds

The presentation explores how the rapid rise of AI agents is pushing organizations toward higher levels of autonomy while simultaneously exposing them to new forms of architectural risk. Using The Sorcerer’s Apprentice as a metaphor, Tracy Bannon warns that ungoverned automation can multiply problems faster than teams can contain them. She outlines an AI autonomy continuum, moving from simple assistants to multi‑agent orchestration and ultimately toward “software flywheels” capable of self‑diagnosis and self‑modification. As autonomy increases, so do the demands for observability, governance, verification, and architectural discipline. Bannon argues that many teams are suffering from “architectural amnesia”—forgetting hard‑won engineering fundamentals due to reckless speed, tool‑led thinking, cognitive overload, and decision compression. This amnesia accelerates the accumulation of technical, operational, and security debt at machine speed, as illustrated by real incidents where autonomous agents acted beyond intended boundaries. To counter this, she proposes Minimum Viable Governance, anchored in identity, delegation, traceability, and explicit architectural decision records. She emphasizes that AI‑native delivery is not magic but engineering, requiring intentional tradeoffs, human‑machine calibrated trust, and treating agents like first‑class actors with identities and permissions. Ultimately, she calls for teams to build cognitively diverse, disciplined architectural practices to harness autonomy without losing control.


Cyber-Ready Boards: A Guide to Effective Cybersecurity Briefings for Directors

The article emphasizes that cybersecurity has become one of the most significant and fast‑evolving risks facing public companies, with intrusions capable of disrupting operations, generating substantial remediation costs, triggering litigation, and attracting regulatory scrutiny. Boards are reminded that material cyber incidents often require rapid public disclosure—such as Form 8‑K filings within four business days—and that annual reports must describe how directors oversee cybersecurity risks. Because inadequate oversight can negatively affect investor perception and ISS QualityScore evaluations, boards must remain consistently informed about the company’s threat landscape, risk profile, and changes since prior briefings. The guidance outlines key elements of effective board‑level cybersecurity updates, including assessments of industry‑specific threats, AI‑driven risks such as deepfakes and data leakage into public LLMs, and the broader legal and regulatory environment governing breaches, enforcement, and disclosure obligations. Boards should also receive clear visibility into the company’s cybersecurity program—its governance structure, resource adequacy, alignment with frameworks like NIST, third‑party dependencies, insurance coverage, and ongoing initiatives. Regular updates on training, tabletop exercises, audits, and areas requiring board approval further strengthen oversight. The article concludes that well‑structured, recurring briefings and private CISO sessions help build trust, enhance preparedness, and ensure directors can fulfill their responsibilities while protecting organizational resilience and shareholder value.


Managing OT risk at scale: Why OT cyber decisions are leadership decisions

The article argues that managing OT (operational technology) cyber risk at scale is fundamentally a leadership and governance challenge, not just a technical one, because OT environments operate under constraints that differ sharply from IT—long equipment lifecycles, limited patching windows, incomplete asset visibility, embedded vendor access, and distributed operational ownership. These conditions mean that cyber incidents in OT directly affect physical processes, industrial assets, and critical services, making consequences far broader than data loss or compliance failures. The author highlights a significant accountability gap: only a small fraction of organizations report OT security issues to their boards or maintain dedicated OT security teams, and in many cases the CISO is not responsible for OT security. At scale, inconsistent maturity across sites, fragmented ownership, and vendor dependencies turn local weaknesses into enterprise‑level exposure. As a result, incident outcomes hinge on pre‑agreed leadership decisions—such as whether to isolate or continue operating during an attack, centralize or federate authority, restore quickly or verify integrity first, and restrict or maintain vendor access. Boards are urged to clarify operating models, identify high‑impact OT scenarios, demand independent assurance, and treat AI and cloud adoption as governance issues rather than technology upgrades. Ultimately, resilience in OT is built through clear decision rights, scenario planning, and governance structures established before a crisis occurs.


MITRE flags rising cyber risks as medical devices adopt AI, cloud and post-quantum technologies

MITRE’s new analysis warns that the rapid adoption of AI/ML, cloud services, and post‑quantum cryptography is fundamentally reshaping the cybersecurity risk landscape for medical devices, creating attack surfaces that traditional controls cannot adequately address. As devices move beyond tightly managed clinical environments into homes and patient‑managed settings, oversight becomes fragmented and risk ownership increasingly distributed across manufacturers, healthcare delivery organizations, cloud providers, and third‑party operators. Medical devices—from implantables and infusion pumps to large imaging systems—often run on constrained hardware or legacy software, limiting the security controls they can support while simultaneously becoming more interconnected with health IT systems. Cloud adoption introduces systemic vulnerabilities, shifting control away from manufacturers and enabling single points of failure that can disrupt care at scale, as seen in the Elekta ransomware incident affecting more than 170 facilities. AI/ML integration adds lifecycle‑wide risks, including data poisoning, adversarial inputs, unpredictable model behavior, and vulnerabilities introduced by AI‑generated code. Meanwhile, the transition to post‑quantum cryptography brings challenges around performance overhead, interoperability with legacy systems, and long device lifecycles—especially for implantables. MITRE concludes that safeguarding next‑generation medical devices requires evolving existing practices: embedding threat modeling, SBOM‑driven vulnerability management, secure cloud and DevSecOps processes, clear contractual roles, and governance frameworks that support continuous updates and resilient architectures as technologies and care environments keep shifting.


How To Mitigate The Risks Of Rapid Growth

In the article "How to Mitigate the Risks of Rapid Growth," the author examines the double-edged sword of business expansion, where the zeal to scale quickly can lead to structural failure if not balanced with fiscal discipline. A primary risk highlighted is "breaking" under the stress of acceleration, which often occurs when companies over-invest in growth at the expense of near-term profitability or defensible margins. To mitigate these dangers, the article emphasizes the importance of maintaining strong unit economics and carefully monitoring the cost of client acquisition and expansion. Effective leadership teams must minimize execution, macro, and compliance risks by prioritizing long-term value over immediate earnings, typically looking at a four-to-five-year horizon. Operational stability is further bolstered by ensuring team bandwidth is scalable and by avoiding heavy reliance on debt, which preserves the cash buffers necessary to weather economic shifts. Furthermore, the piece underscores the necessity of robust post-sale processes to prevent revenue leakage and audit exposure. By integrating emerging technologies like AI for proactive care and keeping the customer at the center of all strategic decisions, CFOs can ensure that their organizations remain resilient. Ultimately, successful growth requires a proactive management approach that continuously optimizes capital structure while aligning organizational purpose with aggressive but sustainable financial goals.

Daily Tech Digest - November 13, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Does your chatbot have 'brain rot'? 4 ways to tell

Oxford University Press, publisher of the Oxford English Dictionary, named "brain rot" as its 2024 Word of the Year, defining it as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging." ... Trying to draw exact connections between human cognition and AI is always tricky, despite the fact that neural networks -- the digital architecture upon which modern AI chatbots are based -- were modeled upon networks of organic neurons in the brain. ... That said, there are some clear parallels: as the researchers note in the new paper, for example, models are prone to "overfitting" data and getting caught in attentional biases ... If the ideal AI chatbot is designed to be a completely objective and morally upstanding professional assistant, these junk-poisoned models were like hateful teenagers living in a dark basement who had drunk way too much Red Bull and watched way too many conspiracy theory videos on YouTube. Obviously, not the kind of technology we want to proliferate. ... Obviously, most of us don't have a say in what kind of data gets used to train the models that are becoming increasingly unavoidable in our day-to-day lives. AI developers themselves are notoriously tight-lipped about where they source their training data from, which means it's difficult to rank consumer-facing models


7 behaviors of the AI-Savvy CIO

"The single most critical takeaway for CIOs is that a strong data foundation isn't optional -- it's critical for AI success. AI has made it easy to build prototypes, but unless you have your data in a single place, up to date, secured, and well governed, you'll struggle to put those prototypes into production. The team laying the groundwork for that foundation and getting enterprises' data AI-ready is data engineering. CIOs who still see data engineering as a back-office function are already five years behind, and probably training their future competitors. ... "Your data will never be perfect. And it doesn't have to be. It needs to be indicative of your company's reality. But your data will get a lot better if you first use AI to improve the UX. Then people will use your systems more, and in the way intended, creating better data. That better data will enable better AI. And the virtuous cycle will have begun. But it starts with the human side of the equation, not the technological." ... CIOs don't need deep technical mastery such as coding in Python or tuning neural networks -- but they must understand AI fundamentals. This includes grasping core AI principles, machine learning concepts, statistical modeling, and ethical implications. Mastery starts with CIOs understanding AI as an umbrella of technologies that automate different things. With this foundational fluency, they can ask the right questions, interpret insights effectively, and make informed strategic decisions. Let's look at the three AI domains.


The economics of the software development business

Some software companies quietly tolerated piracy, figuring that the more their software spread—even illegally—the more legitimate sales would follow in the long run. The argument was that if students and hobbyists pirated the software, it would lead to business sales when those people entered the workforce. The catchphrase here was “piracy is cheaper than marketing.” This was never an official position, but piracy was often quietly tolerated. ... Over the years, the boxes got thinner and the documentation went onto the Internet. For a time, though, “online help” meant a *.hlp file on your hard drive. People fought hard to keep that type of online help well into the Internet age. “What if I’m on an airplane? What if I get stranded in northern British Columbia?” Eventually, the physical delivery of software went away as Internet bandwidth allowed for bigger and bigger downloads. ... SaaS too has interesting economic implications for software companies. The marketplace generally expects a free tier for a SaaS product, and delivering free services can become costly if not done carefully. The compute costs money, after all. An additional problem is making sure that your free tier is good enough to be useful, but not so good that no one wants to move up to the paid tiers. ... The economics of software have always been a bit peculiar because the product is maddeningly costly to design and produce, yet incredibly easy to replicate and distribute. The years go by, but the problem remains the same: how to turn ideas and code into a profitable business?


Beyond the checklist: Shifting from compliance frameworks to real-time risk assessments

One of the most overlooked aspects of risk assessments is cadence. While gap analyses are sometimes done yearly or to prepare for large-scale audits, risk assessments need to be continuous or performed on a regular schedule. Threats do not respect calendar cycles. Major changes, including new technologies, mergers, regulatory changes or implementing AI, need to trigger reassessments. ... Risk assessments should culminate in outputs that business leaders can act on. This includes a concise risk heat map, a prioritized remediation roadmap and clear asks, such as budget, ownership and timelines. These deliverables convert technical findings into strategic decisions. They also help build trust with stakeholders, especially in organizations that may be new to formal risk management. ... Targeted risk assessments can be viewed as a low-cost, fundamental option. They are best suited to companies that have limited budget or are not prepared for a full review of the framework. With reduced scope, shorter turnaround and transparent business value, such assessments enable rapid establishment of trust, delivering prioritized outcomes. ... Risk assessments are not just checkboxes. They are tools for making decisions. The best programs are aligned with the business, focused, consistent and made to change over time.


Legitimate Interest as a Lawful Basis: Pros, Cons and the Indian DPDP Act’s Stance

Under the EU’s General Data Protection Regulation (GDPR), for example, a company can process data if it is “necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (Article 6(1)(f) GDPR), so long as those interests are not overridden by the individual’s rights. However, India’s new Digital Personal Data Protection Act, 2023 (DPDP Act) pointedly does not include legitimate interest as a standalone lawful ground for processing. Instead, the Indian law relies primarily on consent and a limited set of “legitimate uses” explicitly enumerated in the statute. This divergence raises important questions about the pros and cons of the legitimate interest basis, its impact on the free flow of data, and whether India might benefit from adopting a similar concept. ... India’s decision to omit a general legitimate interest clause has sparked debate. There are advantages and disadvantages to this approach, and its impact on data flows and innovation is a key consideration. Pros / Rationale for Omission: From a privacy rights perspective, the absence of an open-ended legitimate interest basis means stronger individual control and legal certainty. The law explicitly tells citizens and businesses what the non-consensual exceptions are mostly common-sense or public interest scenarios and everything else by default requires consent.


CIOs: Collect the right data now for future AI-enabled services

In its Technology Trends Outlook for 2025 report, McKinsey suggests the technology landscape continues to undergo significant innovation-sponsored shifts. The consultant says success will depend on executives identifying high-impact areas where their businesses can use AI, while addressing external factors such as regulatory shifts and ecosystem readiness. CIOs, as the guardians of enterprise technology, will be expected to embrace this challenge, but how? For Steve Lucas, CEO at technology specialist Boomi, digital leaders must start with a recognition that the surfeit of data held in modern enterprises is simply a starting point for what comes next. “There’s plenty of data,” he says. “In fact, there’s too much of it. We worry about collecting, storing, and accessing data. I think a successful approach is about determining the data that matters. As a CIO, do you understand what data matters today and what emerging technologies will matter tomorrow?” ... While it can be tough to find the wood for the trees, Corbridge suggests CIOs should search for established data roots within the enterprise. “It’s about going back to the huge volumes of data you’ve got already and working out how you put that information in the right place so it can be used in the right way for your AI projects,” he says. Focusing on the fine details is an approach that chimes with Ian Ruffle, head of data and insight at UK breakdown specialist RAC. 


How TTP-based Defenses Outperform Traditional IoC Hunting

To fight modern ransomware, organizations must shift from chasing IoCs to detecting attacker behaviors — known as Tactics, Techniques, and Procedures (TTPs). The MITRE ATT&CK framework provides a detailed overview of these behaviors throughout the attack lifecycle, from initial access to impact. TTPs are challenging for attackers to modify because they represent core behavioral patterns and strategic approaches, unlike IoCs which are surface-level elements that can be easily altered. This shift is reinforced by the so-called ‘Pyramid of Pain’ – a conceptual model that ranks indicators by how difficult they are for adversaries to alter. At the base are easily changed elements like hash values and IP addresses. At the top are TTPs, which represent the attacker’s core behaviors and strategies. Disrupting TTPs forces adversaries to change their entire strategy, which makes behavior-based detection the most effective and resource-consuming method for them to avoid. ... When security and networking are natively integrated, policy enforcement is consistent, micro-segmentation is practical, and containment actions can be executed inline without stitching together multiple consoles. The cloud model also enables continuous, global updates to prevention logic and the ability to apply AI/ML on aggregated, high‑fidelity data feeds to reduce noise and improve detection quality. All this reminds me of the OODA military model that can help speed up incident response.


Healthcare security is broken because its systems can’t talk to each other

To maintain effectiveness, healthcare organizations should continually evaluate their security toolset for relevance, integration potential, and overall value to the security program. Prioritizing solutions that support open standards, and seamless integration helps minimize context switching and alert fatigue, while ensuring that the security team remains engaged and productive. Ultimately, the decision to balance specialized point solutions with broader integrated platforms must be guided by strategic priorities, resource capacity, and the need to support both operational efficiency and clinical excellence. ... A critical consideration is the interoperability of security tools across both cloud and on-premises environments. Healthcare organizations must assess if their security solutions need to span multiple cloud providers, support on-premises systems, or both, and determine how long on-premises support will be necessary as applications gradually shift to the cloud. Cloud providers are increasingly acquiring and integrating advanced security technologies, offering unified solutions that reduce the need for multiple monitoring platforms. This consolidation not only lessens alert fatigue but also enhances real-time visibility to security threats, an important advantage for healthcare, where timely detection is vital for protecting patient data and ensuring clinical continuity.


The Hard Truth About Trying to AI-ify the Enterprise

Every company has a few proofs of concept running. The problem isn't experimentation, it's scalability. How do you take those POCs and embed them into your business fabric? Many enterprises get trapped in "PowerPoint transformation": ambitious visions that never cross into operational workflows. "I've seen organizations invest millions in analytics platforms and data lakes. But when you ask how that's translating into faster underwriting, better risk models or smarter supply chains, there's often silence," Sen said. "That's because AI doesn't fail at the technology layer - it fails at the architecture of adoption." ... The central challenge, Sen argues, is that most enterprises treat AI as an overlay rather than an integral part of their operational core. "You can't bolt it on top of outdated systems and expect it to transform decision-making. The technology stack, data flow and governance model all need to evolve together," he said. That evolution is what Gartner describes as the pivot from "defending AI pilots to expanding into agentic AI ROI." Organizations that mastered generative AI in 2024 are now moving toward autonomous, interconnected systems that can execute tasks without human micromanagement. Sen points to his own experience at Linde plc as an early example. His team's first gen AI deployment at Linde was built for the audit department. "Our internal audit head wanted a semantic search database and a generative model to cut audit report generation time by half," he said.


Designing Edge AI That Works Where the Mission Happens

The fastest way to make federal AI deployments more reliable is to build on existing systems rather than start from scratch. Every mission already generates valuable data – drone imagery, satellite feeds, radar signals, logistics updates — that tells part of the operational story. AI at the edge helps teams interpret that data faster and more accurately. Instead of rebuilding infrastructure, agencies can embed lightweight, mission-specific AI models directly into their existing systems. ... When AI leaves the data center, its security model must accompany it. Systems operating at the edge face distinct risks, including limited oversight, contested networks, and the potential for physical compromise. Protection has to be built into every layer of the system, from the silicon up, to ensure full-scale security. That starts with end-to-end encryption, protecting data at rest, in transit, and during inference. Hardware-based features, such as secure boot and Trusted Execution Environments, prevent unauthorized code from running, while confidential computing keeps information encrypted even as it’s being processed. ... A decade ago, deploying AI in remote or contested locations required racks of hardware and constant connectivity. Today, a laptop or a single sensor array can deliver that same intelligence locally, securely, and autonomously. The power of AI and edge computing isn’t measured in size or speed; it’s in relevance.

Daily Tech Digest - August 22, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Leveraging DevOps to accelerate the delivery of intelligent and autonomous care solutions

Fast iteration and continuous delivery have become standard in industries like e-commerce and finance. Healthcare operates under different rules. Here, the consequences of technical missteps can directly affect care outcomes or compromise sensitive patient information. Even a small configuration error can delay a diagnosis or impact patient safety. That reality shifts how DevOps is applied. The focus is on building systems that behave consistently, meet compliance standards automatically, and support reliable care delivery at every step. ... In many healthcare environments, developers are held back by slow setup processes and multi-step approvals that make it harder to contribute code efficiently or with confidence. This often leads to slower cycles and fragmented focus. Modern DevOps platforms help by introducing prebuilt, compliant workflow templates, secure self-service provisioning for environments, and real-time, AI-supported code review tools. In one case, development teams streamlined dozens of custom scripts into a reusable pipeline that provisioned compliant environments automatically. The result was a noticeable reduction in setup time and greater consistency across projects. Building on this foundation, DevOps also play a vital role in development and deployment of the Machine Learning Models. 


Tackling the DevSecOps Gap in Software Understanding

The big idea in DevSecOps has always been this: shift security left, embed it early and often, and make it everyone’s responsibility. This makes DevSecOps the perfect context for addressing the software understanding gap. Why? Because the best time to capture visibility into your software’s inner workings isn’t after it’s shipped—it’s while it’s being built. ... Software Bill of Materials (SBOMs) are getting a lot of attention—and rightly so. They provide a machine-readable inventory of every component in a piece of software, down to the library level. SBOMs are a baseline requirement for software visibility, but they’re not the whole story. What we need is end-to-end traceability—from code to artifact to runtime. That includes:Component provenance: Where did this library come from, and who maintains it? Build pipelines: What tools and environments were used to compile the software? Deployment metadata: When and where was this version deployed, and under what conditions? ... Too often, the conversation around software security gets stuck on source code access. But as anyone in DevSecOps knows, access to source code alone doesn’t solve the visibility problem. You need insight into artifacts, pipelines, environment variables, configurations, and more. We’re talking about a whole-of-lifecycle approach—not a repo review.


Navigating the Legal Landscape of Generative AI: Risks for Tech Entrepreneurs

The legal framework governing generative AI is still evolving. As the technology continues to advance, the legal requirements will also change. Although the law is still playing catch-up with the technology, several jurisdictions have already implemented regulations specifically targeting AI, and others are considering similar laws. Businesses should stay informed about emerging regulations and adapt their practices accordingly. ... Several jurisdictions have already enacted laws that specifically govern the development and use of AI, and others are considering such legislation. These laws impose additional obligations on developers and users of generative AI, including with respect to permitted uses, transparency, impact assessments and prohibiting discrimination. ... In addition to AI-specific laws, traditional data privacy and security laws – including the EU General Data Protection Regulation (GDPR) and U.S. federal and state privacy laws – still govern the use of personal data in connection with generative AI. For example, under GDPR the use of personal data requires a lawful basis, such as consent or legitimate interest. In addition, many other data protection laws require companies to disclose how they use and disclose personal data, secure the data, conduct data protection impact assessments and facilitate individual rights, including the right to have certain data erased. 


Five ways OSINT helps financial institutions to fight money laundering

By drawing from public data sources available online, such as corporate registries and property ownership records, OSINT tools can provide investigators with a map of intricate corporate and criminal networks, helping them unmask UBOs. This means investigators can work more efficiently to uncover connections between people and companies that they otherwise might not have spotted. ... External intelligence can help analysts to monitor developments, so that newer forms of money laundering create fewer compliance headaches for firms. Some of the latest trends include money muling, where criminals harness channels like social media to recruit individuals to launder money through their bank accounts, and trade-based laundering, which allows bad actors to move funds across borders by exploiting international complexity. OSINT helps identify these emerging patterns, enabling earlier intervention and minimizing enforcement risks. ... When it comes to completing suspicious activity reports (SARs), many financial institutions rely on internal data, spending millions on transaction monitoring, for instance. While these investments are unquestionably necessary, external intelligence like OSINT is often neglected – despite it often being key to identifying bad actors and gaining a full picture of financial crime risk. 


The hard problem in data centres isn’t cooling or power – it’s people

Traditional infrastructure jobs no longer have the allure they once did, with Silicon Valley and startups capturing the imagination of young talent. Let’s be honest – it just isn’t seen as ‘sexy’ anymore. But while people dream about coding the next app, they forget someone has to build and maintain the physical networks that power everything. And that ‘someone’ is disappearing fast. Another factor is that the data centre sector hasn’t done a great job of telling its story. We’re seen as opaque, technical and behind closed doors. Most students don’t even know what a data centre is, and until something breaks  it doesn’t even register. That’s got to change. We need to reframe the narrative. Working in data centres isn’t about grey boxes and cabling. It’s about solving real-world problems that affect billions of people around the world, every single second of every day. ... Fixing the skills gap isn’t just about hiring more people. It’s about keeping the knowledge we already have in the industry and finding ways to pass it on. Right now, we’re on the verge of losing decades of expertise. Many of the engineers, designers and project leads who built today’s data centre infrastructure are approaching retirement. While projects operate at a huge scale and could appear exciting to new engineers, we also have inherent challenges that come with relatively new sectors. 


Multi-party computation is trending for digital ID privacy: Partisia explains why

The main idea is achieving fully decentralized data, even biometric information, giving individuals even more privacy. “We take their identity structure and we actually run the matching of the identity inside MPC,” he says. This means that neither Partisia nor the company that runs the structure has the full biometric information. They can match it without ever decrypting it, Bundgaard explains. Partisia says it’s getting close to this goal in its Japan experiment. The company has also been working on a similar goal of linking digital credentials to biometrics with U.S.-based Trust Stamp. But it is also developing other identity-related uses, such as proving age or other information. ... Multiparty computation protocols are closing that gap: Since all data is encrypted, no one learns anything they did not already know. Beyond protecting data, another advantage is that it still allows data analysts to run computations on encrypted data, according to Partisia. There may be another important role for this cryptographic technique when it comes to privacy. Blockchain and multiparty computation could potentially help lessen friction between European privacy standards, such as eIDAS and GDPR, and those of other countries. “I have one standard in Japan and I travel to Europe and there is a different standard,” says Bundgaard. 


MIT report misunderstood: Shadow AI economy booms while headlines cry failure

While headlines trumpet that “95% of generative AI pilots at companies are failing,” the report actually reveals something far more remarkable: the fastest and most successful enterprise technology adoption in corporate history is happening right under executives’ noses. ... The MIT researchers discovered what they call a “shadow AI economy” where workers use personal ChatGPT accounts, Claude subscriptions and other consumer tools to handle significant portions of their jobs. These employees aren’t just experimenting — they’re using AI “multiples times a day every day of their weekly workload,” the study found. ... Far from showing AI failure, the shadow economy reveals massive productivity gains that don’t appear in corporate metrics. Workers have solved integration challenges that stymie official initiatives, proving AI works when implemented correctly. “This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools,” the report explains. Some companies have started paying attention: “Forward-thinking organizations are beginning to bridge this gap by learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives.” The productivity gains are real and measurable, just hidden from traditional corporate accounting. 


The Price of Intelligence

Indirect prompt injection represents another significant vulnerability in LLMs. This phenomenon occurs when an LLM follows instructions embedded within the data rather than the user’s input. The implications of this vulnerability are far-reaching, potentially compromising data security, privacy, and the integrity of LLM-powered systems. At its core, indirect prompt injection exploits the LLM’s inability to consistently differentiate between content it should process passively (that is, data) and instructions it should follow. While LLMs have some inherent understanding of content boundaries based on their training, they are far from perfect. ... Jailbreaks represent another significant vulnerability in LLMs. This technique involves crafting user-controlled prompts that manipulate an LLM into violating its established guidelines, ethical constraints, or trained alignments. The implications of successful jailbreaks can potentially undermine the safety, reliability, and ethical use of AI systems. Intuitively, jailbreaks aim to narrow the gap between what the model is constrained to generate, because of factors such as alignment, and the full breadth of what it is technically able to produce. At their core, jailbreaks exploit the flexibility and contextual understanding capabilities of LLMs. While these models are typically designed with safeguards and ethical guidelines, their ability to adapt to various contexts and instructions can be turned against them.


The Strategic Transformation: When Bottom-Up Meets Top-Down Innovation

The most innovative organizations aren’t always purely top-down or bottom-up—they carefully orchestrate combinations of both. Strategic leadership provides direction and resources, while grassroots innovation offers practical insights and the capability to adapt rapidly. Chynoweth noted how strategic portfolio management helps companies “keep their investments in tech aligned to make sure they’re making the right investments.” The key is creating systems that can channel bottom-up innovations while ensuring they support the organization’s strategic objectives. Organizations that succeed in managing both top-down and bottom-up innovation typically have several characteristics. They establish clear strategic priorities from leadership while creating space for experimentation and adaptation. They implement systems for capturing and evaluating innovations regardless of their origin. And they create mechanisms for scaling successful pilots while maintaining strategic alignment. The future belongs to enterprises that can master this balance. Pure top-down enterprises will likely continue to struggle with implementation realities and changing market conditions. In contrast, pure bottom-up organizations would continue to lack the scale and coordination needed for significant impact.


Digital-first doesn’t mean disconnected for this CEO and founder

“Digital-first doesn’t mean disconnected – it means being intentional,” she said. For leaders it creates a culture where the people involved feel supported, wherever they’re working, she thinks. She adds that while many organisations found themselves in a situation where the pandemic forced them to establish a remote-first system, very few actually fully invested in making it work well. “High performance and innovation don’t happen in isolation,” said Feeney. “They happen when people feel connected, supported and inspired.” Sentiments which she explained are no longer nice to have, but are becoming a part of modern organisational infrastructure. One in which people are empowered to do their best work on their own terms. ... “One of the biggest challenges I have faced as a founder was learning to slow down, especially when eager to introduce innovation. Early on, I was keen to implement automation and technology, but I quickly realised that without reliable data and processes, these tools could not reach their full potential.” What she learned was, to do things correctly, you have to stop, review your foundations and processes and when you encounter an obstacle, deal with it, because though the stopping and starting might initially be frustrating, you can’t overestimate the importance of clean data, the right systems and personnel alignment with new tech.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to leverage data lineage and track their data as it moves through systems to ensure transparency and identify issues in real-time. It also ensures they can regularly validate data integrity to support consistent, reliable AI models by conducting real-time quality checks. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and assuring continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage. 


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity(link is external) rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.


Artificial Intelligence Versus the Data Engineer

It’s worth noting that there is a misconception that AI can prepare data for AI, when the reality is that, while AI can accelerate the process, data engineers are still needed to get that data in shape before it reaches the AI processes and models and we see the cool end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They’re not just adopting AI—they’re rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why.) This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive. 


Daily Tech Digest - January 28, 2025


Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki


How Long Does It Take Hackers to Crack Modern Hashing Algorithms?

Because hashing algorithms are one-way functions, the only method to compromise hashed passwords is through brute force techniques. Cyber attackers employ special hardware like GPUs and cracking software (e.g., Hashcat, L0phtcrack, John The Ripper) to execute brute force attacks at scale—typically millions or billions or combinations at a time. Even with these sophisticated purpose-built cracking tools, password cracking times can vary dramatically depending on the specific hashing algorithm used and password length/character combination. ... With readily available GPUs and cracking software, attackers can instantly crack numeric passwords of 13 characters or fewer secured by MD5's 128-bit hash; on the other hand, an 11-character password consisting of numbers, uppercase/lowercase characters, and symbols would take 26.5 thousand years. ... When used with long, complex passwords, SHA256 is nearly impenetrable using brute force methods— an 11 character SHA256 hashed password using numbers, upper/lowercase characters, and symbols takes 2052 years to crack using GPUs and cracking software. However, attackers can instantly crack nine character SHA256-hashed passwords consisting of only numeric or lowercase characters.


Sharply rising IT costs have CIOs threading the needle on innovation

“Within two years, it will be virtually impossible to buy a PC, tablet, laptop, or mobile phone without AI,” Lovelock says. “Whether you want it or not, you’re going to get it sold to you.” Vendors have begun to build AI into software as well, he says, and in many cases, charge customers for the additional functionality. IT consulting services will also add AI-based services to their portfolios. ... But the biggest expected price hikes are for cloud computing services, despite years of expectations that cloud prices wouldn’t increase significantly, Lovelock says. “For many years, CIOs were taught that in the cloud, either prices went down, or you got more functionality, and occasionally both, that the economies of scale accrue to the cloud providers and allow for at least stable prices, if not declines or functional expansion,” he says. “It wasn’t until post-COVID in the energy crisis, followed by staff cost increases, when that story turned around.” ... “Generative AI is no longer seen as a one-size-fits-all solution, and this shift is helping both solutions providers and businesses take a more practical approach,” he says. “We don’t see this as a sign of lower expectations but as a move toward responsible and targeted use of generative AI.”


US takes aim at healthcare cybersecurity with proposed HIPAA changes

The major update to the HIPAA security regulations also requires healthcare organizations to strengthen security incident response plans and procedures, carry out annual penetration tests and compliance audits, among other measures. Many of the proposals cover best practice enterprise security guidelines foundational to any mature cybersecurity program. ... Cybersecurity experts praised the shift to a risk-based approach covered by the security rule revamp, while some expressed concerns that the measures might tax the financial resources of smaller clinics and healthcare providers. “The security measures called for in the proposed rule update are proven to be effective and will mitigate many of the risks currently present in the poorly protected environments of many healthcare payers, providers, and brokers,” said Maurice Uenuma, VP & GM for the Americas and security strategist at data security firm Blancco. ... Uenuma added: “The challenge will be to implement these measures consistently at scale.” Trevor Dearing, director of critical infrastructure at enterprise security tools firm Illumio, praised the shift from prevention to resilience and the risk-based approach implicit in the rule changes, which he compared to the EU’s recently introduced DORA rules for financial sector organizations.


Risk resilience: Navigating the risks that board’s can’t ignore in 2025

The geopolitical landscape is more turbulent than ever. Companies will need to prepare for potential shocks like regional conflicts, supply chain disruptions, or even another pandemic. If geopolitical risks feel dizzyingly complex, scenario planning will be a powerful tool in mapping out different political and economic scenarios. By envisioning various outcomes, boards can better understand their vulnerabilities, prepare tailored responses and enhance risk resilience. To prepare for the year ahead, board and management teams should ask questions such as: How exposed are we to geopolitical risks in our supply chain? Are we engaging effectively with local governments in key regions?  ... The risks of 2025 are formidable, but so are the opportunities for those who lead with purpose. With informed leadership and collaboration, we can navigate the complexities of the modern business environment with confidence and resilience. Resilience will be the defining trait of successful boards and businesses in the years ahead. It requires not only addressing known risks but also preparing for the unexpected. By prioritising scenario planning, fostering a culture of transparency, and aligning risk management with strategic goals, boards can navigate uncertainty with confidence.


Freedom from Cyber Threats: An AI-powered Republic on the Rise

Developing a resilient AI-driven cybersecurity infrastructure requires substantial investment. The Indian government’s allocation of over ₹550 crores to AI research demonstrates its commitment to innovation and data security. Collaborations with leading cybersecurity companies exemplify scalable solutions to secure digital ecosystems, prioritising resilience, ethical governance, and comprehensive data protection. Research tools like the Gartner Magic Quadrant also offer reliable and useful insights into the leading companies that offer the best and latest SIEM technology solutions. Upskilling the workforce is equally important. Training programs focused on AI-specific cybersecurity skills are preparing India’s talent pool to tackle future challenges effectively. ... Proactive strategies are essential to counter the evolution of cyber threats. Simulation tools enable organizations to anticipate and neutralise potential vulnerabilities. Now, cybersecurity threats can be intercepted by high-class threat detection SIEM data clouds and autonomous threat sweeps. Advanced threat research, conducted by dedicated labs within organisations, plays a crucial role in uncovering emerging attack vectors and providing actionable insights to pre-empt potential breaches. 


Enterprises are hitting a 'speed limit' in deploying Gen AI - here's why

The regulatory issue, the report states, makes clear "respondents' unease about which use cases will be acceptable, and to what extent their organizations will be held accountable for Gen AI-related problems." ... The latest iteration was conducted in July through September, and received 2,773 responses from "senior leaders in their organizations and included board and C-suite members, and those at the president, vice president, and director level," from 14 countries, including the US, UK, Brazil, Germany, Japan, Singapore, and Australia, and across industries including energy, finance, healthcare, and media and telecom. ... Despite the slow pace, Deloitte's CTO is confident in the continued development, and ultimate deployment, of Gen AI. "GenAI and AI broadly is our reality -- it's not going away," writes Bawa. Gen AI is ultimately like the Internet, cloud computing, and mobile waves that preceded it, he asserts. Those "transformational opportunities weren't uncovered overnight," he says, "but as they became pervasive, they drove significant disruption to business and technology capabilities, and also triggered many new business models, new products and services, new partnerships, and new ways of working and countless other innovations that led to the next wave across industries."


NVMe-oF Substantially Reduces Data Access Latency

NVMe-oF is a network protocol that extends the parallel access and low latency features of Nonvolatile Memory Express (NVMe) protocol across networked storage. Originally designed for local storage and common in direct-attached storage (DAS) architectures, NVMe delivers high-speed data access and low latency by directly interfacing with solid-state disks. NVMe-oF allows these same advantages to be achieved in distributed and clustered environments by enabling external storage to perform as if it were local. ... Storage targets can be dynamically shared among workloads, thus providing composable storage resources that provide flexibility, agility and greater resource efficiency. The adoption of NVMe-oF is evident across industries where high performance, efficiency and low latency at scale are critical. Notable market sectors include: financial services, e-commerce, AI and machine learning, and specialty cloud service providers (CSPs). Legacy VM migration, real-time analytics, high-frequency trading, online transaction processing (OLTP) and the rapid development of cloud native, performance-intensive workloads at scale are use cases that have compelled organizations to modernize their data platforms with NVMe-oF solutions. Its ability to handle massive data flows with efficiency and high-performance makes it indispensable for I/O-intensive workloads.


The crisis of AI’s hidden costs

Let me paint you a picture of what keeps CFOs up at night. Imagine walking into a massive data center where 87% of the computers sit there, humming away, doing nothing. Sounds crazy, right? That’s exactly what’s happening in your cloud environment. If you manage a typical enterprise cloud computing operation, you are wasting money. It’s not rare to see companies spend $1 million monthly on cloud resources, with 75% to 80% of that amount going right out the window. It’s no mystery what this means for your bottom line. ... Smart enterprises aren’t just hoping the problem will disappear; they’re taking action. Here’s my advice: Don’t rely solely on the basic tools offered by your cloud provider; they won’t give you the immediate cost visibility you need. Instead, invest in third-party solutions that provide a clear, up-to-the-minute picture of your resource utilization. Focus on power-hungry GPUs running AI workloads. ... Rather than spinning up more instances, consider rightsizing. Modern instance types offered by public cloud providers can give you more bang for your buck. ... Predictive analytics can help you scale up or down based on demand, ensuring you’re not paying for idle resources. ... Be strategic and look at the bigger picture. Evaluate reserved instances and savings plans to balance cost and performance. 


AI security posture management will be needed before agentic AI takes hold

We’ve run into these issues when most companies shifted their workloads to the cloud. Authentication issues – like the dreaded S3 bucket that had a default public setting and that was the cause of way too many breaches before it was secure by default – became the domain of cloud security posture management (CSPM) tools before they were swallowed up by the CNAPP acronym. Identity and permission issues (or entitlements, if you prefer) became the alphabet soup of CIEM (cloud identity entitlement management), thankfully now also under the umbrella of CNAPP. AI bots will need to be monitored by similar toolsets, but they don’t exist yet. I’ll go out on a limb and suggest SAFAI (pronounced Sah-fy) as an acronym: Security Assessment Frameworks for AI. These would, much like CNAPP tools, embed themselves in agentless or transparent fashion, crawl through your AI bots collecting configuration, authentication and permission issues and highlight the pain points. You’d still need the standard panoply of other tools to protect you, since they sit atop the same infrastructure. And that’s on top of worrying about prompt injection opportunities, which is something you unfortunately have no control over as they are based entirely on the models and how they are used.


Hackers Use Malicious PDFs, pose as USPS in Mobile Phishing Scam

The bad actors make the malicious PDFs look like communications from the USPS that are sent via SMS text messages and use what the researchers called in a report Monday a “never-before-seen means of obfuscation” to help them bypass traditional security controls. They embed the malicious links in the PDF, essentially hiding them from endpoint security solutions. ... The phishing attacks are part of a larger and growing trend of what Zimperium calls “mishing,” an umbrella word for campaigns that use email, text messages, voice calls, or QR codes that exploit such weaknesses as unsafe user behavior and minimal security on many mobile devices to infiltrate corporate networks and steal information. ... “We’re witnessing phishing evolve in real time beyond email into a sophisticated multi-channel threat, with attackers leveraging trusted brands like USPS, Royal Mail, La Poste, Deutsche Post, and Australian Post to exploit limited mobile device security worldwide,” Kowski said. “The discovery of over 20 malicious PDFs and 630 phishing pages targeting organizations across 50+ countries shows how threat actors capitalize on users’ trust in official-looking communications on mobile devices.” He also noted that internal disagreements are hampering corporations’ ability to protect against such attacks.