
Daily Tech Digest - August 23, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


Enterprise passwords becoming even easier to steal and abuse

Attackers actively target user credentials because they offer the most direct route or foothold into a targeted organization’s network. Once inside, attackers can move laterally across systems, searching for other user accounts to compromise, or they attempt to escalate their privileges and gain administrative control. This hunt for credentials extends beyond user accounts to include code repositories, where developers may have hard-coded access keys and other secrets into application source code. Attacks using valid credentials were successful 98% of the time, according to Picus Security. ... “CISOs and security teams should focus on enforcing strong, unique passwords, using MFA everywhere, managing privileged accounts rigorously and testing identity controls regularly,” Curran says. “Combined with well-tuned DLP and continuous monitoring that can detect abnormal patterns quickly, these measures can help limit the impact of stolen or cracked credentials.” Picus Security’s latest findings reveal a concerning gap between the perceived protection of security tools and their actual performance. An overall protection effectiveness score of 62% contrasts with a shockingly low 3% prevention rate for data exfiltration. “Failures in detection rule configuration, logging gaps and system integration continue to undermine visibility across security operations,” according to Picus Security.
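One concrete mitigation for the hard-coded-secrets problem described above is scanning source before it lands in a repository. A minimal sketch, assuming a couple of illustrative regex rules; production scanners such as gitleaks or truffleHog ship far richer rule sets:

```python
import re

# Hypothetical patterns for common hard-coded secrets (illustrative only).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{20,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_source(text: str) -> list[str]:
    """Return the names of secret patterns found in a source snippet."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

snippet = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\nshort = "zzzz"'
print(scan_source(snippet))  # → ['aws_access_key']
```

Wiring a check like this into a pre-commit hook or CI step catches credentials before they ever reach the repository that attackers are hunting through.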


Architecting the next decade: Enterprise architecture as a strategic force

In an age of escalating cyber threats and expanding digital footprints, security can no longer be layered on; it must be architected in from the start. With the rise of AI, IoT and even quantum computing on the horizon, the threat landscape is more dynamic than ever. Security-embedded architectures prioritize identity-first access control, continuous monitoring and zero-trust principles as baseline capabilities. ... Sustainability is no longer a side initiative; it’s becoming a first principle of enterprise architecture. As organizations face pressure from regulators, investors and customers to lower their carbon footprint, digital sustainability is gaining traction as a measurable design objective. From energy-efficient data centers to cloud optimization strategies and greener software development practices, architects are now responsible for minimizing the environmental impact of IT systems. The Green Software Foundation has emerged as a key ecosystem partner, offering measurement standards like software carbon intensity (SCI) and tooling for emissions-aware development pipelines. ... Technology leaders must now foster a culture of innovation, build interdisciplinary partnerships and enable experimentation while ensuring alignment with long-term architectural principles. They must guide the enterprise through both transformation and stability, navigating short-term pressures and long-term horizons simultaneously.
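The software carbon intensity (SCI) score mentioned above is defined by the Green Software Foundation as operational plus embodied emissions per functional unit: SCI = ((E × I) + M) / R. A minimal sketch of the calculation, with illustrative numbers:

```python
def sci(energy_kwh: float, grid_intensity: float, embodied_g: float, per_r: float) -> float:
    """Software Carbon Intensity: SCI = ((E * I) + M) / R.

    energy_kwh     -- E: energy consumed by the software (kWh)
    grid_intensity -- I: location-based grid carbon intensity (gCO2eq/kWh)
    embodied_g     -- M: embodied hardware emissions amortised to this workload (gCO2eq)
    per_r          -- R: the functional unit (e.g. API calls served)
    """
    return (energy_kwh * grid_intensity + embodied_g) / per_r

# e.g. 1.2 kWh at 400 gCO2eq/kWh plus 50 g embodied, over 10,000 API calls
print(round(sci(1.2, 400, 50, 10_000), 4))  # → 0.053 gCO2eq per call
```

Because R is chosen by the team (per request, per user, per transaction), the score works as a comparable design objective across releases of the same system.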


Capitalizing on Digital: Four Strategic Imperatives for Banks and Credit Unions

Modern architectures dissolve the boundary between core and digital. The digital banking solution is no longer a bolt-on to the core; the core and digital come together to form the accountholder experience. That user experience is delivered through the digital channel, but when done correctly, it’s enabled by the modern core. Among other things, the core transformation requires robust use of shared APIs, consistent data structures, and unified development teams. Leading financial institutions are coming to realize that core evaluations must now include an evaluation of the core's capability to enable the digital experience. Criteria such as availability, reliability, real-time processing, speed, and security are emerging as foundational requirements of a core to enable the digital experience. "If your core can’t keep up with your digital, you’re stuck playing catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital Engineers. ... Many institutions still operate with digital siloed in one department, while marketing, product, and operations pursue separate agendas. This leads to mismatched priorities — products that aren’t promoted effectively, campaigns that promise features operations can’t support, and technical fixes that don’t address the root cause of customer and member pain points. ... Small-business services are a case in point. Jack Henry’s Strategy Benchmark study found that 80% of CEOs plan to expand these services over the next two years.


Bentley Systems CIO Talks Leadership Strategy and AI Adoption

The thing that’s really important for a CIO to be thinking about is that we are a microcosm for how all of the business functions are trying to execute the tactics against the strategy. What we can do across the portfolio is represent the strategy in real terms back to the business. We can say: These are all of the different places where we're thinking about investing. Does that match with the strategy we thought we were setting for ourselves? And where is there a delta and a difference? ... When I got my first CIO role, there was all of this conversation about business process. That was the part that I had to learn and figure out how to map into these broader, strategic conversations. I had my first internal IT role at Deutsche Bank, where we really talked about product model a lot -- thinking about our internal IT deliverables as products. When I moved to Lenovo, we had very rich business process and transformation conversations because we were taking the whole business through such a foundational change. I was able to put those two things together. It was a marriage of several things: running a product organization; marrying that to the classic IT way of thinking about business process; and then determining how that becomes representative to the business strategy.


What Is Active Metadata and Why Does It Matter?

Active metadata addresses the shortcomings of passive approaches by automatically updating the metadata whenever an important aspect of the information changes. Defining active metadata and understanding why it matters begins by looking at the shift in organizations’ data strategies from a focus on data acquisition to data consumption. The goal of active metadata is to promote the discoverability of information resources as they are acquired, adapted, and applied over time. ... From a data consumer’s perspective, active metadata adds depth and breadth to their perception of the data that fuels their decision-making. By highlighting connections between data elements that would otherwise be hidden, active metadata promotes logical reasoning about data assets. This is especially so when working on complex problems that involve a large number of disconnected business and technical entities. The active metadata analytics workflow orchestrates metadata management across platforms to enhance application integration, resource management, and quality monitoring. It provides a single, comprehensive snapshot of the current status of all data assets involved in business decision-making. The technology augments metadata with information gleaned from business processes and information systems.


Godrej Enterprises CHRO on redefining digital readiness as culture, not tech

“Digital readiness at Godrej Enterprises Group is about empowering every employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just about technology adoption. It’s about building a workforce that is agile, continuously learning, and empowered to innovate.” This reframing reflects a broader trend across Indian industry, where digital transformation is no longer confined to IT departments but runs through every layer of an organisation. For Godrej Enterprises Group, this means designing a workplace where intrapreneurship is rewarded, innovation is constant, and employees are trained to think beyond immediate functions. ... “We’ve moved away from one-off training sessions to creating a dynamic ecosystem where learning is accessible, relevant, and continuous,” she said. “Learning is no longer a checkbox — it’s a shared value that energises our people every day.” This shift is underpinned by leadership development programmes and innovation platforms, ensuring that employees at every level are encouraged to experiment and share knowledge.  ... “We see digital skilling as a core business priority, not just an HR or L&D initiative,” she said. “By making digital skilling a shared responsibility, we foster a culture where learning is continuous, progress is visible, and success is celebrated across the organisation.”


AI is creeping into the Linux kernel - and official policy is needed ASAP

However, before you get too excited, he warned: "This is a great example of what LLMs are doing right now. You give it a small, well-defined task, and it goes and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this specific hash to use our standard API." Levin said another AI win is that "for those of us who are not native English speakers, it also helps with writing a good commit message. It is a common issue in the kernel world where sometimes writing the commit message can be more difficult than actually writing the code change, and it definitely helps there with language barriers." ... Looking ahead, Levin suggested LLMs could be trained to become good Linux maintainer helpers: "We can teach AI about kernel-specific patterns. We show examples from our codebase of how things are done. It also means that by grounding it into our kernel code base, we can make AI explain every decision, and we can trace it to historical examples." In addition, he said the LLMs can be connected directly to the Linux kernel Git tree, so "AI can go ahead and try and learn things about the Git repo all on its own." ... This AI-enabled program automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. The tool examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.
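The backporting tool described above is AI-driven; what follows is only a toy heuristic pre-filter of the kind such a pipeline might run before an LLM pass, with made-up keyword lists:

```python
# Illustrative hint lists -- not the actual rules of Levin's tool.
FIX_HINTS = ("fix", "leak", "overflow", "use-after-free", "null deref", "race")
EXCLUDE_HINTS = ("new feature", "cleanup", "refactor", "documentation")

def backport_candidate(commit_msg: str) -> bool:
    """Cheaply flag commits that look like stable-tree backport material."""
    msg = commit_msg.lower()
    if any(h in msg for h in EXCLUDE_HINTS):
        return False
    return any(h in msg for h in FIX_HINTS)

print(backport_candidate("mm: fix use-after-free in page cache"))  # → True
print(backport_candidate("docs: refactor build documentation"))    # → False
```

The point of such a filter is cost: the cheap rule trims the candidate set so the expensive model only examines commits with a plausible chance of qualifying.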


Applications and Architecture – When It’s Not Broken, Should You Try to Fix It?

No matter how reliable your application components are, they will need to be maintained, upgraded or replaced at some point. As elements in your application evolve, some will reach end-of-life status – for example, Redis 7.2 will reach end-of-life status for security updates in February 2026. Before that point, it’s necessary to assess the available options. For businesses in some sectors like financial services, running out-of-date, unsupported software is a potential compliance failure under security and resilience regulations. For example, the Payment Card Industry Data Security Standard version 4.0 requires teams to verify every year that all their software and hardware is supported; in the case of end-of-life software, teams must also provide a full migration plan to be completed within twelve months. ... For developers and software architects, understanding the role that any component plays in the overall application makes it easier to plan ahead. Even the most reliable and consistent component may need to change given outside circumstances. In the Discworld series, golems are so reliable that they become the standard for currency; at the same time, there are so many of them that any problem could affect the whole economy. When it comes to data caching, Redis has been a reliable companion for many developers.


From cloud migration to cloud optimization

The report, based on insights from more than 2,000 IT leaders, reveals that a staggering 94% of global IT leaders struggle with cloud cost optimization. Many enterprises underestimate the complexities of managing public cloud resources and the inadvertent overspending that occurs from mismanagement, overprovisioning, or a lack of visibility into resource usage. This inefficiency goes beyond just missteps in cloud adoption. It also highlights how difficult it is to align IT cost optimization with broader business objectives. ... This growing focus sheds light on the rising importance of finops (financial operations), a practice aimed at bringing greater financial accountability to cloud spending. Adding to this complexity is the increasing adoption of artificial intelligence and automation tools. These technologies drive innovation, but they come with significant associated costs. ... The argument for greater control is not new, but it has gained renewed relevance when paired with cost optimization strategies. ... With 41% of respondents’ IT budgets still being directed to scaling cloud capabilities, it’s clear that the public cloud will remain a cornerstone of enterprise IT in the foreseeable future. Cloud services such as AI-powered automation remain integral to transformative business strategies, and public cloud infrastructure is still the preferred environment for dynamic, highly scalable workloads. Enterprises will need to make cloud deployments truly cost-effective.
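The overprovisioning problem the report describes is exactly what finops rightsizing checks target. A minimal sketch with illustrative thresholds and field names; real tooling would pull utilization from the cloud provider's metrics API:

```python
def rightsizing_report(resources: list[dict], cpu_threshold: float = 20.0) -> list[dict]:
    """Flag resources whose average CPU utilisation suggests overprovisioning."""
    findings = []
    for r in resources:
        if r["avg_cpu_pct"] < cpu_threshold:
            findings.append({
                "name": r["name"],
                "action": "downsize",
                # Assumption for illustration: one instance size down halves cost.
                "est_monthly_saving": round(r["monthly_cost"] * 0.5, 2),
            })
    return findings

fleet = [
    {"name": "web-1", "avg_cpu_pct": 8.0,  "monthly_cost": 220.0},
    {"name": "etl-1", "avg_cpu_pct": 71.0, "monthly_cost": 540.0},
]
print(rightsizing_report(fleet))
# → [{'name': 'web-1', 'action': 'downsize', 'est_monthly_saving': 110.0}]
```

The estimated-savings column is what ties the technical finding back to the business objective — the alignment gap the report says most IT leaders struggle with.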


The Missing Layer in AI Infrastructure: Aggregating Agentic Traffic

Software architects and engineering leaders building AI-native platforms are starting to notice familiar warning signs: sudden cost spikes on AI API bills, bots with overbroad permissions tapping into sensitive data, and a disconcerting lack of visibility or control over what these AI agents are doing. It’s a scenario reminiscent of the early days of microservices – before we had gateways and meshes to restore order – only now the "microservices" are semi-autonomous AI routines. Gartner has begun shining a spotlight on this emerging gap. ... Every major shift in software architecture eventually demands a mediation layer to restore control. When web APIs took off, API gateways became essential for managing authentication/authorization, rate limits, and policies. With microservices, service meshes emerged to govern internal traffic. Each time, the need only became clear once the pain of scale surfaced. Agentic AI is on the same path. Teams are wiring up bots and assistants that call APIs independently - great for demos ... So, what exactly is an AI Gateway? At its core, it’s a middleware component – either a proxy, service, or library – through which all AI agent requests to external services are channeled. Rather than letting each agent independently hit whatever API it wants, you route those calls via the gateway, which can then enforce policies and provide central management.
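The mediation pattern described above — every agent call funneled through one choke point that enforces policy — can be sketched in a few lines. Scopes, limits, and method names here are illustrative, not any product's API:

```python
import time

class AIGateway:
    """Toy AI gateway: per-agent allow-lists plus a per-minute rate limit."""

    def __init__(self, policies: dict[str, dict]):
        self.policies = policies                 # agent -> {"allowed": set, "rpm": int}
        self.calls: dict[str, list[float]] = {}  # agent -> recent call timestamps

    def request(self, agent: str, api: str) -> str:
        policy = self.policies.get(agent)
        if policy is None or api not in policy["allowed"]:
            return "denied: not permitted"
        # Sliding one-minute window for the rate limit.
        window = [t for t in self.calls.get(agent, []) if t > time.time() - 60]
        if len(window) >= policy["rpm"]:
            return "denied: rate limit"
        window.append(time.time())
        self.calls[agent] = window
        return f"forwarded: {api}"               # a real gateway would proxy and log here

gw = AIGateway({"support-bot": {"allowed": {"crm.read"}, "rpm": 2}})
print(gw.request("support-bot", "crm.read"))    # → forwarded: crm.read
print(gw.request("support-bot", "crm.delete"))  # → denied: not permitted
```

Because every call transits the gateway, the same choke point that blocks overbroad permissions also yields the central cost and usage visibility whose absence causes the surprise API bills.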



Daily Tech Digest - July 08, 2025


Quote for the day:

“If you really want the key to success, start by doing the opposite of what everyone else is doing.” -- Brad Szollose


MCP Vulnerability Exposes the AI Untrusted Code Crisis

Most organizations have rigorous approval processes before allowing arbitrary code to run in their environments whether from open source projects or vendor solutions. Yet with this new wave of tools, we’re simultaneously allowing thousands of employees to constantly update codebases with arbitrary, untrusted AI-generated code or wiring said codebases and applications to mechanisms that can alter or modify their behavior. This isn’t about stopping the use of AI coding agents or sacrificing the massive productivity gains they provide. Instead, we should standardize better ways that allow us to run untrusted code across our software development pipelines. ... As AI development tools gain adoption across enterprises, there is a new class of systems to support them that can execute code on behalf of developers. This includes AI code assistants generating and running code snippets, MCP servers providing AI systems access to local tools and data, automated testing tools executing AI-generated test cases and development agents performing complex multistep operations. Each of these represents a potential code execution pathway that often bypasses traditional security controls. The risk isn’t just that AI-generated code can be inadvertently malicious; it’s that these new systems also create pathways for untrusted code execution.
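One standardized way to run untrusted AI-generated code, as the passage argues for, is process isolation with a hard timeout. A minimal sketch; a real pipeline would layer an OS-level sandbox (containers, seccomp, gVisor) on top of this:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    """Run a code snippet in a separate interpreter with a hard timeout,
    no inherited environment, and no access to the parent process state."""
    try:
        out = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode (ignores env vars, user site)
            capture_output=True, text=True, timeout=timeout_s, env={},
        )
        return out.stdout.strip() or out.stderr.strip()
    except subprocess.TimeoutExpired:
        return "killed: timeout"

print(run_untrusted("print(2 + 2)"))                     # → 4
print(run_untrusted("while True: pass", timeout_s=0.5))  # → killed: timeout
```

The value of standardizing a wrapper like this is that every execution pathway — code assistant, MCP server, test generator — goes through the same controlled entry point instead of bypassing security controls ad hoc.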


Is English the next programming language? JetBrains’ CEO says no

JetBrains does need to contend with the fact that many of its users feel threatened by AI replacing them, even if CEO Kirill Skrygan notes that job displacement isn’t happening at anywhere near the rate some have suggested. Products, languages and IT infrastructure can indeed be made redundant too. We may also add that many layoff rounds use AI as an excuse to make cuts that are simply financially motivated. Still, we need to appreciate that AI is indeed changing the overall landscape. Tasks can be automated, and AI is eagerly shoveling up the developer code that’s freely available online. What about Kotlin specifically?  ... “Here’s my vision. I think programming languages will evolve a lot. I admit that you may not need high level programming languages in the classical sense anymore, but the solution still wouldn’t be English.” Skrygan envisions a middle ground between Kotlin and natural language. Currently, the closest approximation is Kotlin DSL. It’s a design doc that can be compiled as code. Ultimately, like anything digital, it converts into binary at the lowest level. The JetBrains CEO highlights how this is merely a repeat of what we’ve already seen: “People were writing in bytecode and assembler 40 years ago. Now, nobody cares about it anymore. It’s secondary.”

Privacy is blockchain’s missing link—and America’s opportunity to lead

We are at an inflection point. On one hand, blockchain has evolved from an experimental idea into a foundational layer for decentralized finance (DeFi), gaming, cross-border payments, and digital identity. On the other, the absence of privacy threatens to stall its momentum. Without privacy guarantees, Web3 won’t scale into a secure, inclusive internet economy—it will remain a risky, self-surveilling shadow of its potential. It’s not just user safety at stake. Institutional adoption, long seen as the tipping point for crypto’s maturation, is lagging in part because privacy solutions are underdeveloped. Financial institutions and enterprises cannot embrace systems that force them to reveal business-sensitive transactions to competitors and regulators alike. Privacy is not the enemy of compliance; it’s a prerequisite for serious engagement. ... First, policymakers must move past the false binary of privacy versus compliance. These are not mutually exclusive goals. Clear guidelines that embrace advanced cryptography, establish safe harbors for privacy-preserving innovation, and differentiate between consumer protection and surveillance will enable the next generation of secure digital finance. Second, industry leaders need to elevate privacy to the level of consensus mechanisms, scalability, and user experience. 


How scientists are trying to use AI to unlock the human mind

In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from a range of 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. ... Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. ... The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. 


New Study Reveals True AI Capabilities And Job Replacement Risk

For business leaders, this framework offers something really valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place. Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about what human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.


Why EU Policy Must Catch Up to the Neurotechnology Boom

After conducting a comprehensive analysis of nearly 300 neurotechnology companies worldwide, the Center for Future Generations discovered a surprising trend: among firms fully dedicated to neurotech, consumer firms now outnumber medical ones, making up 60% of the global neurotechnology landscape. And they're proliferating at an unprecedented rate—more than quadrupling in the past decade compared to the previous 25 years. ... EEG, the technology at the heart of this revolution, has been around since the 1920s. It's crude and can't read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states. And when coupled with artificial intelligence and other personal data—like location, buying behaviors, and biometrics—these patterns can reveal far more about us than we might imagine. ... As this technology moves into the mainstream, the potential for misuse becomes profound. Imagine pre-election advertising that adapts its messaging based on your emotional reaction. Imagine disinformation campaigns tailored to your subconscious fears, measured directly from your brain. Imagine authoritarian governments monitoring emotional responses to propaganda, searching for dissent in citizens' brainwaves. This marks a critical moment for European policymakers.


Enterprises Are Prioritizing Generative AI Spending in 2025

The report, "Generative AI Adoption Index," highlights how organizations are moving gen AI from experimentation to full-scale implementation and offers practical strategies to create business value. CEOs, CTOs and CIOs currently lead most gen AI innovation, but leadership structures are evolving to include specialized AI roles, such as CAIOs, at the highest levels of organizations. ... Along with CAIOs, a thoughtful change management strategy will be critical. The ideal strategy should address operating model changes, data management practices and talent pipelines. Today, just 14% of organizations have a change management strategy, but this will increase to 76% by end of 2026, highlighting a growing recognition of the need for structured adaptation. But a sizable proportion of organizations may still struggle to keep pace with AI-driven transformation, with one in four organizations still lacking a strategy in 2026. ... Third-party vendors are becoming key enablers of gen AI transformation across organizations globally. From supplying outsourced talent to offering services such as cloud computing and storage, these vendors help bridge critical technology and talent gaps. Effective gen AI deployment will depend on strong collaboration between external experts and internal teams. 


AI’s rise demands more from the UK data center market

The growing demand for digital infrastructure, fueled by the surge in AI, has intensified competition for suitable land to build data centers. This scarcity (particularly in London), coupled with the rise in construction and operational costs, makes it difficult to establish data centers in the most efficient and cost-effective manner. Similarly, an over-reliance on well-established technology clusters (such as West London) can increase resource constraints and vulnerability to power outages and downtime. With UK policy frameworks around data centers still evolving, discussions are ongoing around security, energy consumption, and specific regulatory needs. ... Similarly, traditional methods demand a high level of energy consumption to keep AI chips operating at optimal temperatures. Given the energy-intensive nature of air cooling and its inability to keep up with cooling demands, the data center industry is reaching a critical juncture: stifle the capabilities of AI technologies by not integrating effective thermal management, or invest in a more effective, future-thinking approach to cooling? ... The UK’s data center expansion is not just a scaling project, it is a rethinking of what data centers and associated cooling infrastructures must become.


Why CISOs are making the SASE switch: Fewer vendors, smarter security, better AI guardrails

“SASE is an existential threat to all appliance-based network security companies,” Shlomo Kramer, Cato’s CEO, told VentureBeat. “The vast majority of the market is going to be refactored from appliances to cloud service, which means SASE [is going to be] 80% of the market.” A fundamental architectural transformation is driving that shift. SASE converges traditionally siloed networking and security functions into a single, cloud-native service edge. It combines SD-WAN with critical security capabilities, including secure web gateway (SWG), cloud access security broker (CASB) and ZTNA to enforce policy and protect data regardless of where users or workloads reside. ... The SASE consolidation wave reveals how enterprises are fundamentally rethinking security architecture. With AI attacks exploiting integration gaps instantly, single-vendor SASE has become essential for both protection and operational efficiency. The reasoning is straightforward. Every vendor handoff creates vulnerability. Each integration adds latency. Security leaders know that unified platforms can help eliminate these risks while enabling business velocity. CISOs are increasingly demanding a single console, a single agent and unified policies. 


CISOs urged to fix API risk before regulation forces their hand

The widespread use of APIs to support mobile apps, cloud services, and partner integrations means that the attack surface has changed. But the security practices often haven’t. APIs today handle everything from identity claims and cardholder data to health and account information. Yet in many organizations, they remain outside the scope of standard security programs. ... Oppenheim added that meaningful oversight at the board level doesn’t require technical fluency. “Board-level metrics in such a technically complex space can be difficult to surface meaningfully, but there are still effective ways to guide oversight and investment. Directors should ask which recognised standards (e.g. FAPI) have been adopted or are in the roadmap, and whether the organization has applied a maturity model or framework to benchmark its current posture and track improvements over time.” ... So far, the biggest improvements in API security have come either through direct regulation or industry-led mandates. But pressure is building elsewhere. “Again, organizational size plays a key role,” said Oppenheim. “Larger firms and infrastructure providers are already moving ahead voluntarily – not just in banking, but in payments and identity platforms – because they see strong API security as a necessary foundation for scale and trust.”

Daily Tech Digest - June 29, 2025


Quote for the day:

“Great minds discuss ideas; average minds discuss events; small minds discuss people.” -- Eleanor Roosevelt


Who Owns End-of-Life Data?

Enterprises have never been more focused on data. What happens at the end of that data's life? Who is responsible when it's no longer needed? Environmental concerns are mounting as well. A Nature study warns that AI alone could generate up to 5 million metric tons of e-waste by 2030. A study from researchers at Cambridge University and the Chinese Academy of Sciences said the top reason enterprises dispose of e-waste rather than recycling computers is cost. E-waste can contain metals, including copper, gold, silver, aluminum and rare earth elements, but proper handling is expensive. Data security is a concern as well, since breach-proofing doesn't get better than destroying equipment. ... End-of-life data management may sit squarely in the realm of IT, but it increasingly pulls in compliance, risk and ESG teams, the report said. Driven by rising global regulations and escalating concerns over data leaks and breaches, C-level involvement at every stage signals that end-of-life data decisions are being treated as strategically vital - not simply handed off. Consistent IT participation also suggests organizations are well-positioned to select and deploy solutions that work with their existing tech stack. That said, shared responsibility doesn't guarantee seamless execution. Multiple stakeholders can lead to gaps unless underpinned by strong, well-communicated policies, the report said.


How AI is Disrupting the Data Center Software Stack

Over the years, there have been many major shifts in IT infrastructure – from the mainframe to the minicomputer to distributed Windows boxes to virtualization, the cloud, containers, and now AI and GenAI workloads. Each time, the software stack seems to get torn apart. What can we expect with GenAI? ... Galabov expects severe disruption in the years ahead on a couple of fronts. Take coding, for example. In the past, anyone wanting a new industry-specific application for their business might pay five figures for development, even if they went to a low-cost region like Turkey. For homegrown software development, the price tag would be much higher. Now, an LLM can be used to develop such an application for you. GenAI tools have been designed explicitly to enhance and automate several elements of the software development process. ... Many enterprises will be forced to face the reality that their systems are fundamentally legacy platforms that are unable to keep pace with modern AI demands. Their only course is to commit to modernization efforts. Their speed and degree of investment are likely to determine their relevance and competitive positioning in a rapidly evolving market. Kleyman believes that the most immediate pressure will fall on data-intensive, analytics-driven platforms such as CRM and business intelligence (BI). 


AI Improves at Improving Itself Using an Evolutionary Trick

The best SWE-bench agent was not as good as the best agent designed by expert humans, which currently scores about 70 percent, but it was generated automatically, and maybe with enough time and computation an agent could evolve beyond human expertise. The study is a “big step forward” as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture. DGMs can theoretically score agents simultaneously on coding benchmarks and also specific applications, such as drug design, so they’d get better at getting better at designing drugs. Zhang said she’d like to combine a DGM with AlphaEvolve. ... One concern with both evolutionary search and self-improving systems—and especially their combination, as in DGM—is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned.


Data center costs surge up to 18% as enterprises face two-year capacity drought

Smart enterprises are adapting with creative strategies. CBRE’s Magazine emphasizes “aggressive and long-term planning,” suggesting enterprises extend capacity forecasts to five or 10 years, and initiate discussions with providers much earlier than before. Geographic diversification has become essential. While major hubs price out enterprises, smaller markets such as São Paulo saw pricing drops of as much as 20.8%, while prices in Santiago fell 13.7% due to shifting supply dynamics. Magazine recommended “flexibility in location as key, exploring less-constrained Tier 2 or Tier 3 markets or diversifying workloads across multiple regions.” For Gogia, “Tier-2 markets like Des Moines, Columbus, and Richmond are now more than overflow zones, they’re strategic growth anchors.” Three shifts have elevated these markets: maturing fiber grids, direct renewable power access, and hyperscaler-led cluster formation. “AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.


Fintech’s AI Obsession Is Useless Without Culture, Clarity and Control

What does responsible AI actually mean in a fintech context? According to PwC’s 2024 Responsible AI Survey, it encompasses practices that ensure fairness, transparency, accountability and governance throughout the AI lifecycle. It’s not just about reducing model bias — it’s about embedding human oversight, securing data, ensuring explainability and aligning outputs with brand and compliance standards. In financial services, these aren’t "nice-to-haves" — they’re essential for scaling AI safely and effectively. Financial marketing is governed by strict regulations and AI-generated content can create brand and legal risks. ... To move AI adoption forward responsibly, start small. Low-risk, high-reward use cases let teams build confidence and earn trust from compliance and legal stakeholders. Deloitte’s 2024 AI outlook recommends beginning with internal applications that use non-critical data — avoiding sensitive inputs like PII — and maintaining human oversight throughout. ... As BCG highlights, AI leaders devote 70% of their effort to people and process — not just technology. Create a cross-functional AI working group with stakeholders from compliance, legal, IT and data science. This group should define what data AI tools can access, how outputs are reviewed and how risks are assessed.


Is Microsoft’s new Mu for you?

Mu uses a transformer encoder-decoder design, which means it splits the work into two parts. The encoder takes your words and turns them into a compressed form. The decoder takes that form and produces the correct command or answer. This design is more efficient than older models, especially for tasks such as changing settings. Mu has 32 encoder layers and 12 decoder layers, a setup chosen to fit the NPU’s memory and speed limits. The model utilizes rotary positional embeddings to maintain word order, dual-layer normalization to maintain stability, and grouped-query attention to use memory more efficiently. ... Mu is truly groundbreaking because it is the first SLM built to let users control system settings using natural language, running entirely on a mainstream shipping device. Apple’s iPhones, iPads, and Macs all have a Neural Engine NPU and run on-device AI for features like Siri and Apple Intelligence. But Apple does not have a small language model as deeply integrated with system settings as Mu. Siri and Apple Intelligence can change some settings, but not with the same range or flexibility. ... By processing data directly on the device, Mu keeps personal information private and responds instantly. This shift also makes it easier to comply with privacy laws in places like Europe and the US since no data leaves your computer.
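The encoder-decoder split and the memory motivation for grouped-query attention can be made concrete with a back-of-the-envelope sketch. The 32/12 layer counts come from the article; the hidden size, head counts, and sequence length below are invented for illustration, not Microsoft's published figures:

```python
from dataclasses import dataclass

@dataclass
class MuLikeConfig:
    # layer counts as described for Mu; other numbers are illustrative guesses
    encoder_layers: int = 32
    decoder_layers: int = 12
    hidden_size: int = 640
    num_query_heads: int = 10
    num_kv_heads: int = 2   # grouped-query attention: far fewer KV heads

def kv_cache_bytes(cfg, seq_len, bytes_per_value=2):
    """Decoder KV-cache size — the memory grouped-query attention shrinks."""
    head_dim = cfg.hidden_size // cfg.num_query_heads
    per_layer = 2 * seq_len * cfg.num_kv_heads * head_dim * bytes_per_value
    return cfg.decoder_layers * per_layer

mha = kv_cache_bytes(MuLikeConfig(num_kv_heads=10), seq_len=512)  # full multi-head
gqa = kv_cache_bytes(MuLikeConfig(), seq_len=512)                 # grouped-query
print(f"multi-head: {mha} B, grouped-query: {gqa} B ({mha // gqa}x smaller)")
```

With these toy numbers the KV cache shrinks fivefold, which is exactly the kind of saving that matters when the whole model must fit within an NPU's memory budget.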


Is It a Good Time to Be a Software Engineer?

AI may be rewriting the rules of software development, but it hasn’t erased the thrill of being a programmer. If anything, the machines have revitalised the joy of coding. New tools make it possible to code in natural language, ship prototypes in hours, and bypass tedious setup work. From solo developers to students, the process may feel more immediate or rewarding. Yet, this sense of optimism exists alongside an undercurrent of anxiety. As large language models (LLMs) begin to automate vast swathes of development, some have begun to wonder if software engineering is still a career worth betting on. ... Meanwhile, Logan Thorneloe, a software engineer at Google, sees this as a golden era for developers. “Right now is the absolute best time to be a software engineer,” he wrote on LinkedIn. He points out “development velocity” as the reason. Thorneloe believes AI is accelerating workflows, shrinking prototype cycles from months to days, and giving developers unprecedented speed. Companies that adapt to this shift will win, not by eliminating engineers, but by empowering them. More than speed, there’s also a rediscovered sense of fun. Programmers who once wrestled with broken documentation and endless boilerplate are rediscovering the creative satisfaction that first drew them to the field. 


Dumping mainframes for cloud can be a costly mistake

Despite industry hype, mainframes are not going anywhere. They quietly support the backbone of our largest banks, governments, and insurance companies. Their reliability, security, and capacity for massive transactions give mainframes an advantage that most public cloud platforms simply can’t match for certain workloads. ... At the core of this conversation is culture. An innovative IT organization doesn’t pursue technology for its own sake. Instead, it encourages teams to be open-minded, pragmatic, and collaborative. Mainframe engineers have a seat at the architecture table alongside cloud architects, data scientists, and developers. When there’s mutual respect, great ideas flourish. When legacy teams are sidelined, valuable institutional knowledge and operational stability are jeopardized. A cloud-first mantra must be replaced by a philosophy of “we choose the right tool for the job.” The financial institution in our opening story learned this the hard way. They had to overcome their bias and reconnect with their mainframe experts to avoid further costly missteps. It’s time to retire the “legacy versus modern” conflict and recognize that any technology’s true value lies in how effectively it serves business goals. Mainframes are part of a hybrid future, evolving alongside the cloud rather than being replaced by it. 


Why Modern Data Archiving Is Key to a Scalable Data Strategy

Organizations are quickly learning they can’t simply throw all data, new and old, at an AI strategy; instead, it needs to be accurate, accessible, and, of course, cost-effective. Without these requirements in place, it’s far from certain AI-powered tools can deliver the kind of insight and reliability businesses need. As part of the various data management processes involved, archiving has taken on a new level of importance. ... For organizations that need to migrate data, for example, archiving is used to identify which datasets are essential, while enabling users to offload inactive data in the most cost-effective way. This kind of win-win can also be applied to cloud resources, where moving data to the most appropriate service can potentially deliver significant savings. Again, this contrasts with tiering systems and NAS gateways, which rely on global file systems to provide cloud-based access to local files. The challenge here is that access is dependent on the gateway remaining available throughout the data lifecycle because, without it, data recall can be interrupted or cease entirely. ... It then becomes practical to strike a much better balance across the typical enterprise storage technology stack, including long-term data preservation and compliance, where data doesn’t need to be accessed so often, but where reliability and security are crucial.
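The core of an archiving decision — separating active data from cold data that can be offloaded — often comes down to a last-access policy. A minimal sketch, in which the 180-day threshold and the file inventory are invented examples:

```python
import datetime as dt

def tier_files(files, now, archive_after_days=180):
    """Split a file inventory into hot data and archive candidates
    based on last-access age; the threshold is an example policy."""
    cutoff = now - dt.timedelta(days=archive_after_days)
    hot = [f for f in files if f["last_access"] >= cutoff]
    cold = [f for f in files if f["last_access"] < cutoff]
    return hot, cold

now = dt.datetime(2025, 6, 1)
inventory = [
    {"path": "/data/q2_report.parquet", "last_access": dt.datetime(2025, 5, 20)},
    {"path": "/data/2022_logs.tar",     "last_access": dt.datetime(2023, 1, 4)},
]
hot, cold = tier_files(inventory, now)
print([f["path"] for f in cold])  # → ['/data/2022_logs.tar']
```

Real archiving products add content-aware classification and compliance rules on top, but age-based tiering is the usual starting point.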


The Impact of Regular Training and Timely Security Policy Changes on Dev Teams

Constructive refresher training drives continuous improvement by reinforcing existing knowledge while introducing new concepts like AI-powered code generation, automated debugging and cross-browser testing in manageable increments. Teams that implement consistent training programs see significant productivity benefits as developers spend less time struggling with unfamiliar tools and more time automating tasks to focus on delivering higher value. ... Security policies that remain static as teams grow create dangerous blind spots, compromising both the team’s performance and the organization’s security posture. Outdated policies fail to address emerging threats like malware infections and often become irrelevant to the team’s current workflow, leading to workarounds and system vulnerabilities. ... Proactive security integration into development workflows represents a fundamental shift from reactive security measures to preventative strategies. This approach enables growing teams to identify and address security concerns early in the development process, reducing the cost and complexity of remediation. Cultivating a security-first culture becomes increasingly important as teams grow. This involves embedding security considerations into various stages of the development life cycle. Early risk identification in cloud infrastructure reduces costly breaches and improves overall team productivity.

Daily Tech Digest - June 07, 2025


Quote for the day:

"Anger doesn't solve anything; it builds nothing but it can destroy everything" -- Lawrence Douglas Wilder


Software Testing Is at a Crossroads

Organizations are discovering that achieving meaningful quality improvements requires more than technological adoption; it demands fundamental changes in processes, skills, and organizational culture that many teams are still developing. ... There are numerous bottlenecks that are preventing teams from achieving their automation targets. "The test automation gap as we call it usually stems from three key challenges: limited skills, tooling constraints, and resource shortages," Crisóstomo said. He noted that smaller teams often struggle because they don't have enough experienced or specialized staff to take on complex automation work. At the same time, even well-resourced teams run into limitations with their current tools, many of which can't handle the increasing complexity of modern testing needs. "Across the board, nearly every team we surveyed cited bandwidth as a major issue," Crisóstomo said. "It's a classic catch-22: You need time to build automation so you can save time later, but competing priorities make it hard to invest that time upfront." ... "Meanwhile, AI-enhanced quality, particularly in testing and security, hasn't seen the same level of maturity or resources," he said. "That's starting to change, but many teams still see AI as more of a novelty than a business-critical tool for QA."


Empower Users and Protect Against GenAI Data Loss

When early software-as-a-service tools emerged, IT teams scrambled to control the unsanctioned use of cloud-based file storage applications. The answer wasn't to ban file sharing, though; rather, it was to offer a secure, seamless, single-sign-on alternative that matched employee expectations for convenience, usability, and speed. However, this time around the stakes are even higher. With SaaS, data leakage often means a misplaced file. With AI, it could mean inadvertently training a public model on your intellectual property with no way to delete or retrieve that data once it's gone. ... Blocking traffic without visibility is like building a fence without knowing where the property lines are. We've solved problems like these before. Zscaler's position in the traffic flow gives us an unparalleled vantage point. We see what apps are being accessed, by whom and how often. This real-time visibility is essential for assessing risk, shaping policy and enabling smarter, safer AI adoption. Next, we've evolved how we deal with policy. Lots of providers will simply give the black-and-white options of "allow" or "block." The better approach is context-aware, policy-driven governance that aligns with zero-trust principles that assume no implicit trust and demand continuous, contextual evaluation. 
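The "context-aware, policy-driven" alternative to binary allow/block can be pictured as a decision function over request context. The risk tiers, data classes, and graduated actions below are invented examples, not any vendor's actual policy engine:

```python
def ai_access_decision(app_risk, data_class, user_trained):
    """Graduated, context-aware decision instead of binary allow/block.
    All categories and actions here are illustrative."""
    if data_class == "restricted":
        return "block"      # restricted data never reaches an external AI app
    if app_risk == "high":
        return "isolate"    # e.g. render-only access with paste/upload disabled
    if data_class == "internal" and not user_trained:
        return "warn"       # coach the user and log the event
    return "allow"

print(ai_access_decision("high", "public", user_trained=True))    # → isolate
print(ai_access_decision("low", "internal", user_trained=False))  # → warn
```

The point of the graduated outcomes is that most real requests fall between the extremes, so a policy that can only say yes or no forces either risk or workarounds.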


Too many cloud security tools harming incident response times - survey

According to the data, security teams are inundated with an average of 4,080 alerts each month regarding potential cloud-based incidents. However, in stark contrast, respondents reported experiencing just 7 actual security incidents per year. This enormous volume of alerts - compared to the small number of real threats - creates what ARMO describes as a very low signal-to-noise ratio. The survey found that security professionals typically need to sift through approximately 7,000 alerts to find a single active threat. The excessive "tool sprawl" has been cited as a primary factor: 63% of organisations surveyed reported using more than five cloud runtime security tools, yet only 13% were able to successfully correlate alerts across these systems. ... "Over the past few years we've seen rapid growth in the adoption of cloud runtime security tools to detect and prevent active cloud attacks and yet, there's a staggering disparity between alerts and actual security incidents. Without the critical context about asset sensitivity and exploitability needed to make sense of what is happening at runtime, as well as friction between SOC and Cloud Security, teams experience major delays in incident detection and response that negatively impacts performance metrics."
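The survey's signal-to-noise figure follows directly from the numbers quoted:

```python
alerts_per_month = 4080
incidents_per_year = 7

alerts_per_year = alerts_per_month * 12               # 48,960 alerts a year
alerts_per_incident = alerts_per_year / incidents_per_year
print(round(alerts_per_incident))  # ≈ 6,994 — the "~7,000 alerts per real threat" ARMO cites
```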


Giving People the Chance to Innovate Is Critical — ADP CDO

Recognizing that not all innovations start with a fully developed use case, Venjara shares how the team created a controlled sandbox environment. This allows internal teams to experiment securely without the risks of exposure to sensitive data. This sandbox setup, developed in collaboration with security, legal, and privacy teams, provides: A controlled environment for early experimentation; Technical safeguards to protect data; A pathway from ideation to formal review and production ... Another critical pillar in Venjara’s governance strategy is infrastructure. He highlights the development of an AI gateway that centralizes access to approved models and enables comprehensive monitoring. This gateway enables the team to monitor the health and usage of AI models, track input and output data, and govern use cases effectively at scale. Reflecting on internal innovation and culture-building, Venjara shares that it all starts with people and empowering them to explore, learn, and create. A foundational part of his approach is creating space for employees to take initiative, experiment, and bring new ideas to life. This culture of experimentation is paired with a clear articulation of expectations of what success looks like and how individuals can align with the broader mission.


Fortify Your Data Defense: Balancing Data Accessibility and Privacy

Companies need our data, and they usually place it into databases or datasets they can later reference. This makes privacy tricky. Twenty years ago, common rationale followed that removing direct identifiers such as names or street addresses from a dataset meant that dataset was anonymous. Unsurprisingly, we’ve since learned there is nothing anonymous about it. Data anonymization techniques like tokenization and pseudonymization, however, can minimize data exposure while still enabling these companies to perform valuable analytics such as data matching. By ensuring the data is never seen in the clear by another human while the system associates that data with a placeholder, it offers an extra layer of protection against threat actors even if they manage to exfiltrate the data. No one system or solution is perfect, but it’s important we continuously modernize our approach. Emerging technologies like homomorphic encryption, which allows mathematical functions on encrypted data, show promise for the future. Synthetic data, which generates fictional individuals with the same characteristics as real people, is another exciting development. Some companies are involving Chief Privacy Officers in their ranks, and there are whole countries building better frameworks.
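A minimal version of the keyed tokenization the passage describes — pseudonymization that preserves data matching while hiding the raw value — might look like the sketch below. The key handling is drastically simplified for illustration; in practice the key lives in a KMS or HSM, never in source code:

```python
import hashlib
import hmac

SECRET = b"example-key"  # illustrative only — real keys belong in a KMS/HSM

def pseudonymize(value: str) -> str:
    """Keyed-hash pseudonymization: the same input always maps to the
    same token, so records can still be matched across datasets, but
    the token cannot be reversed without the key."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

a = pseudonymize("alice@example.com")
b = pseudonymize("alice@example.com")
c = pseudonymize("bob@example.com")
print(a == b, a == c)  # → True False: matching still works, identities don't leak
```

Because the mapping is deterministic under the key, analytics such as join-based data matching still work on the tokens; rotating or destroying the key severs the link back to real identities.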


Unleashing Powerful Cloud-Native Security Techniques

By leveraging NHI management, organizations can take a significant stride towards ensuring the safety of their cloud data and applications. This approach creates a robust security shield, defending against potential breaches and data leaks. By evolving their cyber strategies to include these powerful techniques, companies can ensure they remain secure and compliant where cyber threats are increasingly sophisticated and relentless. To unlock the full potential of NHIs, it’s vital to work with a partner who understands their dynamics deeply. This partner should offer a solution that caters to the entire lifecycle of NHIs, not just one aspect. Overall, for a truly secure cloud environment, consider NHI management as a fundamental component of your cloud-native security strategy. By embracing this paradigm shift, organizations can fortify themselves against the growing wave of cyber threats, ensuring a safer, more secure cloud journey. ... With a holistic, data-driven approach to NHI management, organizations can ensure that they are well-equipped to handle ever-evolving cyber threats. By establishing and maintaining a secure cloud, they are not only safeguarding their digital assets but also setting the stage for sustainable growth in digital transformation.


Global Digital Policy Roundup: May 2025

The roundup serves as a guide for navigating global digital policy based on the work of the Digital Policy Alert. To ensure trust, every finding links to the Digital Policy Alert entry with the official government source. The full Digital Policy Alert dataset is available for you to access, filter, and download. To stay updated, Digital Policy Alert also offers a customizable notification service that provides free updates on your areas of interest. Digital Policy Alert’s tools further allow you to navigate, compare, and chat with the legal text of AI rules across the globe. ... Content moderation, including the European Commission's DSA enforcement against adult content platforms, Australia's industry codes against age-inappropriate content, China's national network identity authentication measures, and Turkey's bill to repeal the internet regulation law. AI regulation, including the European Commission's AI Act implementation guidelines, Germany's court ruling on Meta's AI training practices, and China's deep synthesis algorithm registrations. Competition policy, including the European Commission's consultation on Microsoft Teams bundling, South Korea's enforcement actions against Meta and intermediary platform operators, China's private economy promotion law, and Brazil's digital markets regulation bill. 


The Greener Code: How real-time data is powering sustainable tech in India

As engineering leaders, we build systems that scale. But we must also ask: are they scaling sustainably? India’s data centres already consume around 2% of the country’s electricity, a number that’s only growing. If we don’t rethink our infrastructure, we risk trading digital progress for environmental cost. That’s where establishing real-time data pipelines reduces the need for batch jobs, temporary file storage, and unnecessary duplication of compute resources. This translates to less wasted computing power, lower carbon emissions, and a greener digital footprint. But it’s not just about saving energy. It’s about designing systems that are smart from the start, architecting not just for performance, but for the planet. ... India is uniquely positioned. A digital-first economy with deep tech talent, rising energy needs, and a growing commitment to sustainability. If we get it right, engineering systems that are both scalable and sustainable, we don’t just solve for India, we lead the world. From Digital India to Smart Cities to Make in India, the government is pushing for innovation. But innovation without sustainability is a short-term gain. What we need is “Sustainable Innovation” — and data streaming can and in fact will be a silent hero in that journey.


Measuring What Matters: The True Impact of Platform Teams

By consolidating tools and infrastructure, companies reduce costs and enhance productivity through automation, leading to faster time-to-market for new products. Improved reliability and compliance reduce potential revenue losses resulting from outages or regulatory violations, while also supporting business growth. To truly gauge the impact of platform teams, it’s essential to look beyond traditional metrics and consider the broader changes they bring to an organization. ... As my professional coaching training taught me, truly listening — not just hearing — is crucial. It’s about understanding everyone’s perspective and connecting intuitively to the real message, including what’s not being said. This level of listening, often referred to as “Level 3” or intuitive listening, involves paying attention to all sensory components: the speaker’s tone of voice, energy level, feelings, and even the silences between words. By practicing this deep, empathetic listening, leaders can create a profound connection with their team members, uncovering motivations, concerns, and ideas that might otherwise remain hidden. This approach not only enhances team happiness but also unlocks the full potential of the platform team, leading to more innovative solutions and stronger collaboration.


The New Fraud Frontier: Why Businesses Must Rethink Identity Verification

Now that fraudsters can access AI tools, the fraud game has entirely changed. Bad actors can generate synthetic identities, manipulate biometric data and even create deepfake videos to pass KYC processes. Additionally, AI enables fraudsters to test security systems at scale, quickly iterating and adapting methods based on system responses. In light of these new threats, businesses need dynamic solutions that can learn and evolve in real time. Ironically, the same technology serving sophisticated fraud can be our most potent defence. Using AI to enhance both pre-KYC and KYC processes delivers the capability to identify complex fraud patterns, adapting faster than human-driven systems ever could. ... The battle against AI-empowered fraud isn’t just about preventing financial losses. It’s about maintaining customer trust in an increasingly sceptical digital marketplace. Every fraudulent transaction erodes confidence, and that’s a cost too high to bear in today’s competitive landscape. Businesses that take a multi-layered approach, integrating pre-KYC and KYC processes in a unified fraud prevention strategy, can stay one step ahead of fraudsters. The key is ensuring that fraud prevention tools – data-rich, AI-driven and flexible – are as adaptive as the threats they are designed to stop.

Daily Tech Digest - February 11, 2025


Quote for the day:

"Your worth consists in what you are and not in what you have." -- Thomas Edison


Protecting Your Software Supply Chain: Assessing the Risks Before Deployment

Given the vast number of third-party components used in modern IT, it's unrealistic to scrutinize every software package equally. Instead, security teams should prioritize their efforts based on business impact and attack surface exposure. High-privilege applications that frequently communicate with external services should undergo product security testing, while lower-risk applications can be assessed through automated or less resource-intensive methods. Whether done before deployment or as a retrospective analysis, a structured approach to PST ensures that organizations focus on securing the most critical assets first while maintaining overall system integrity. ... While Product Security Testing will never prevent a breach of a third party out of your control, it is necessary to allow organizations to make informed decisions about their defensive posture and response strategy. Many organizations follow a standard process of identifying a need, selecting a product, and deploying it without a deep security evaluation. This lack of scrutiny can leave them scrambling to determine the impact when a supply chain attack occurs. By incorporating PST into the decision-making process, security teams gain critical documentation, including dependency mapping, threat models, and specific mitigations tailored to the technology in use. 


Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? Even the most powerful enterprises can’t stop it from happening. But, that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer and Toyota and the rest of those heavy-hitters merely want to pick and choose where their monies are spent. Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. If they add a clause to every RFP that they will only work with model-makers that agree to not do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, they must reimburse all of the dollars we have already paid and must also give us 18 months notice so that we can replace the vendor with a company that will respect the terms of our contracts.” From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts. 


Is Fine-Tuning or Prompt Engineering the Right Approach for AI?

It’s not just about having access to GPUs — it’s about getting the most out of proprietary data with new tools that make fine-tuning easier. Here’s why fine-tuning is gaining traction: Better results with proprietary data: Fine-tuning allows businesses to train models on their own data, making the AI much more accurate and relevant to their specific tasks. This leads to better outcomes and real business value. Easier than ever before: Tools like Hugging Face’s open source libraries, PyTorch and TensorFlow, along with cloud services, have made fine-tuning more accessible. These frameworks simplify the process, even for teams without deep AI expertise. Improved infrastructure: The rising availability of powerful GPUs and cloud-based solutions has made it much easier to set up and run fine-tuning at scale. While fine-tuning opens the door to more customized AI, it does require careful planning and the right infrastructure to succeed. ... As enterprises accelerate their AI adoption, choosing between prompt engineering and fine-tuning will have a significant impact on their success. While prompt engineering provides a quick, cost-effective solution for general tasks, fine-tuning unlocks the full potential of AI, enabling superior performance on proprietary data.


Shifting left without slowing down

On the one hand, automation enabled by GenAI tools in software development is driving unprecedented developer productivity, further emphasizing the gap created by manual application security controls, like security reviews or threat modeling. But in parallel, recent advancements in code understanding enabled by these technologies, together with programmatic policy-as-code security policies, enable a giant leap in the value security automation can bring. ... The first step is recognizing security as a shared responsibility across the organization, not just a specialized function. Equipping teams with automated tools and clear processes helps integrate security into everyday workflows. Establishing measurable goals and metrics to track progress can also provide direction and accountability. Building cross-functional collaboration between security and development teams sets the foundation for long-term success. ... A common pitfall is treating security as an afterthought, leading to disruptions that strain teams and delay releases. Conversely, overburdening developers with security responsibilities without proper support can lead to frustration and neglect of critical tasks. Failure to adopt automation or align security goals with development objectives often results in inefficiency and poor outcomes. 
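The "programmatic policy-as-code" idea the passage mentions reduces to expressing review-checklist items as predicates a pipeline can evaluate automatically. A tiny sketch, where the three rules and the resource shape are invented examples rather than any real framework's schema:

```python
def check_policy(resource):
    """Policy-as-code sketch: each rule is a predicate plus a message,
    evaluated automatically in CI instead of in a manual security review."""
    rules = [
        (lambda r: not r.get("public_access"), "storage must not be public"),
        (lambda r: r.get("encryption") == "aes-256", "encryption required"),
        (lambda r: "owner" in r.get("tags", {}), "owner tag required"),
    ]
    return [msg for rule, msg in rules if not rule(resource)]

bucket = {"public_access": True, "encryption": "none", "tags": {}}
print(check_policy(bucket))  # lists all three violations
```

Production systems typically express the same idea in a dedicated policy language (e.g. Open Policy Agent's Rego) rather than inline lambdas, but the shift is the same: security review becomes a fast, repeatable pipeline step instead of a gate.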


How To Approach API Security Amid Increasing Automated Attack Sophistication

We’ve now gone from ‘dumb’ attacks—for example, web-based attacks focused on extracting data from third parties and on a specific or single vulnerability—to ‘smart’ AI-driven attacks often involving picking an actual target, resulting in a more focused attack. Going after a particular organization, perhaps a large organization or even a nation-state, instead of looking for vulnerable people is a significant shift. The sophistication is increasing as attackers manipulate request payloads to trick the backend system into an action. ... Another element of API security is being aware of sensitive data. Personal Identifiable Information (PII) is moving through APIs constantly and is vulnerable to theft or data exfiltration. Organizations do not often pay attention to vulnerabilities. Still, they pay attention when the result is damage to their organization through leaked PII, stolen finances, or brand reputation. ... The security teams know the network systems and the infrastructure well but don't understand the application behaviors. The DevOps team tends to own the applications but doesn’t see anything in production. This split boundary in most organizations makes it ripe for exploitation. Many data exfiltration cases fall in this no man’s land since an authenticated user executes most incidents.


Top 5 ways attackers use generative AI to exploit your systems

Gen AI tools help criminals pull together different sources of data to enrich their campaigns — whether this is group social profiling, or targeted information gleaned from social media. “AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains. ... The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. “Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mindgard’s Garraghan says. ... “This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes. ... Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.”


Why firewalls and VPNs give you a false sense of security

VPNs and firewalls play a crucial role in extending networks, but they also come with risks. By connecting more users, devices, locations, and clouds, they inadvertently expand the attack surface with public IP addresses. This expansion allows users to work remotely from anywhere with an internet connection, further stretching the network’s reach. Moreover, the rise of IoT devices has led to a surge in Wi-Fi access points within this extended network. Even seemingly innocuous devices like Wi-Fi-connected espresso machines, meant for a quick post-lunch pick-me-up, contribute to the proliferation of new attack vectors that cybercriminals can exploit. ... More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams – but less security and less peace of mind. Pain also comes in the form of degraded user experience and reduced satisfaction with VPN technology across the entire organization, a consequence of backhauling traffic. Other challenges like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows are enough to exhaust even the largest and most efficient IT teams.


Building Trust in AI: Security and Risks in Highly Regulated Industries

AI hallucinations have emerged as a critical problem, with systems generating plausible but incorrect information - for instance, AI has fabricated nonexistent software dependencies, such as PyTorture, creating potential security risks. Hackers could exploit these hallucinations by publishing malicious components masquerading as the fabricated packages. In another case, an AI libelously fabricated an embezzlement claim, resulting in legal action - marking the first time an AI was sued for libel. Security remains a pressing concern, particularly with plugins and software supply chains. A ChatGPT plugin once exposed sensitive data due to a flaw in its OAuth mechanism, and incidents like PyTorch’s vulnerable release over Christmas demonstrate the risks of system exploitation. Supply chain vulnerabilities affect all technologies, while AI-specific threats like prompt injection allow attackers to manipulate outputs or access sensitive prompts, as seen in Google Gemini. ... Organizations can enhance their security strategies by utilizing frameworks like Google’s Secure AI Framework (SAIF). These frameworks highlight security principles, including access control, detection and response systems, defense mechanisms, and risk-aware processes tailored to meet specific business needs.
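One practical guard against hallucinated dependencies is to vet AI-suggested package names against an approved list before anything is installed. The sketch below is a hypothetical illustration: the `VETTED_PACKAGES` contents and the `audit_dependencies` helper are assumptions, not part of any framework named above.

```python
# Hypothetical sketch: screen AI-suggested dependencies against a vetted
# allowlist so that fabricated package names get flagged for human review
# instead of being installed. Allowlist contents are illustrative only.
VETTED_PACKAGES = {"torch", "numpy", "requests", "flask"}

def audit_dependencies(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split requested packages into (approved, flagged-for-review)."""
    approved = [pkg for pkg in requested if pkg.lower() in VETTED_PACKAGES]
    flagged = [pkg for pkg in requested if pkg.lower() not in VETTED_PACKAGES]
    return approved, flagged

# "pytorture" stands in for an AI-fabricated dependency name.
approved, flagged = audit_dependencies(["torch", "pytorture"])
print(approved, flagged)  # ['torch'] ['pytorture']
```

Flagged names would then be checked manually against the official registry, closing the window in which an attacker could publish a malicious package under a hallucinated name.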


When LLMs become influencers

Our ability to influence LLMs is seriously circumscribed. Perhaps if you’re the owner of the LLM and associated tool, you can exert outsized influence on its output. For example, AWS should be able to train Amazon Q to answer questions related to AWS services. There’s an open question as to whether Q would be “biased” toward AWS services, but that’s almost a secondary concern. Maybe it steers a developer toward Amazon ElastiCache and away from Redis, simply by virtue of having more and better documentation and information to offer a developer. The primary concern is ensuring these tools have enough good training data so they don’t lead developers astray. ... Well, one option is simply to publish benchmarks. The LLM vendors will ultimately have to improve their output or developers will turn to other tools that consistently yield better results. If you’re an open source project, commercial vendor, or someone else that increasingly relies on LLMs as knowledge intermediaries, you should regularly publish results that showcase which LLMs do well and which don’t. Benchmarking can help move the industry forward. By extension, if you’re a developer who increasingly relies on coding assistants like GitHub Copilot or Amazon Q, be vocal about your experiences, both positive and negative.
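The benchmarking idea above can be sketched in a few lines. This is a toy harness under stated assumptions: the reference answers, question keys, and hard-coded model outputs are all hypothetical stand-ins; a real harness would call each LLM’s API and use a much larger, curated question set.

```python
# Toy benchmark harness: score each model's answers against a reference
# set, then publish the per-model results. Model outputs are hard-coded
# stand-ins for what API calls to real LLMs would return.
REFERENCE = {
    "redis_default_port": "6379",
    "http_ok_status": "200",
}

def score_model(answers: dict) -> float:
    """Fraction of benchmark questions the model answered correctly."""
    correct = sum(1 for q, a in REFERENCE.items() if answers.get(q) == a)
    return correct / len(REFERENCE)

model_outputs = {
    "model_a": {"redis_default_port": "6379", "http_ok_status": "200"},
    "model_b": {"redis_default_port": "6379", "http_ok_status": "404"},
}
for name, answers in sorted(model_outputs.items()):
    print(f"{name}: {score_model(answers):.0%}")
```

Publishing even a simple scoreboard like this regularly gives vendors a concrete signal to improve on, which is the mechanism the passage argues for.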


Deepfakes: How Deep Can They Go?

Metaphorically, spotting deepfakes is like playing the world’s most challenging game of “spot the difference.” The fakes have become so sophisticated that the inconsistencies are often nearly invisible, especially to the untrained eye. It requires constant vigilance and the ability to question the authenticity of audiovisual content, even when it looks or sounds completely convincing. Recognizing threats and taking decisive action are crucial for mitigating the effects of an attack, so establishing well-defined policies, reporting channels, and response workflows in advance is imperative. Think of it like a citywide defense system responding to incoming missiles. Early warning radars (monitoring) are necessary to detect the threat; anti-missile batteries (AI scanning) are needed to neutralize it; and emergency services (incident response) are essential to quickly handle any impacts. Each layer works in concert to mitigate harm. ... If a deepfake attack succeeds, organizations should immediately notify stakeholders of the fake content, issue corrective statements, and coordinate efforts to remove the offending content. They should also investigate the source, implement additional verification measures, and provide updates to rebuild trust, and they should consider legal action.