Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director of ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages and handles documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.
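To make the engine's output concrete: the structured records it feeds into accounting workflows look roughly like the result of the sketch below. This is a deliberately naive regex-based extractor, not Visma's transformer-based engine; the field names and sample text are illustrative assumptions.

```python
import re

def extract_invoice_fields(text: str) -> dict:
    """Pull a few common invoice fields out of raw document text.

    A learned extraction engine would generalize far beyond these
    hand-written patterns; this sketch only illustrates the kind of
    structured output (date, total, customer reference) that gets
    fed into downstream accounting workflows.
    """
    fields = {}
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    if date:
        fields["date"] = date.group(1)
    total = re.search(r"(?i)total[:\s]+\$?([\d,]+\.\d{2})", text)
    if total:
        fields["total"] = float(total.group(1).replace(",", ""))
    ref = re.search(r"(?i)customer\s+ref(?:erence)?[:\s]+(\S+)", text)
    if ref:
        fields["customer_reference"] = ref.group(1)
    return fields

sample = "Invoice 2025-07-01\nCustomer ref: ACME-42\nTotal: $1,234.56"
print(extract_invoice_fields(sample))
```

The gap between this and a production system is exactly the curated training data Dahl describes: rules like these break on every new layout, while a fine-tuned model learns the variation.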


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then, where we see inefficiencies, be it in workflows or processes, automation works for another 20% of the friction. The remaining 10–20% is where engineering plays its important role, allowing us to address the scale, security, and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focusing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is fragmented or of poor quality, which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is in Agentic AI: autonomous systems that don't just tell you what to do, but actually do it. We've been investing heavily in this space since 2018.


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.
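The RPA side of that email example can be shown in a few lines: the rule-based step touches only structured headers and a keyword match, with no understanding of the message's meaning. The sample message and the "billing" category are invented for illustration; the parsing uses Python's standard email module.

```python
from email import message_from_string

# A minimal, RPA-style extraction: only the structured metadata the
# article mentions (sender, subject, date) plus a crude keyword-based
# category. Note what is absent: no sentiment, no urgency ranking,
# no drafted reply -- those are the agent's territory.
RAW = """\
From: customer@example.com
Subject: URGENT: billing problem
Date: Thu, 10 Jul 2025 09:00:00 +0000

My last invoice was wrong, please help."""

def rpa_extract(raw: str) -> dict:
    msg = message_from_string(raw)
    record = {
        "sender": msg["From"],
        "subject": msg["Subject"],
        "date": msg["Date"],
    }
    # "Broadly categorize according to keywords", as described above.
    record["category"] = "billing" if "billing" in msg["Subject"].lower() else "other"
    return record

print(rpa_extract(RAW))
```

Everything this script cannot do, such as reading the body, judging urgency, and composing a tailored response, is precisely the unstructured-data gap that separates RPA from an AI agent.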


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with NIST’s; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand-new PQC algorithms, whilst allowing legacy devices that do not support them to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasises the importance of understanding the cover time of data, nor references the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.
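The cover-time argument reduces to simple arithmetic: ciphertext harvested in year Y remains sensitive until Y plus its cover time, so migration must be complete before that horizon reaches the CRQC year. A sketch, using the article's 2030 CRQC prediction:

```python
def migration_deadline(crqc_year: int, cover_time_years: int) -> int:
    """Latest year by which data must already be PQC-protected.

    'Harvest now, decrypt later': classical ciphertext captured in
    year Y stays sensitive until Y + cover_time. If that horizon
    reaches the CRQC year, the data is at risk, so migration must
    finish by crqc_year - cover_time.
    """
    return crqc_year - cover_time_years

# With a CRQC predicted for 2030:
print(migration_deadline(2030, 5))   # 5-year cover time -> 2025
print(migration_deadline(2030, 10))  # 10-year cover time -> 2020, already past
```

This is why the article says it is "already too late" for data with a cover time of five years or more: the deadline implied by a 2030 CRQC has effectively arrived.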


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure, but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may choose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires.


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For others like PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle. 

Daily Tech Digest - July 09, 2025


Quote for the day:

"Whenever you see a successful person you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah


Why CIOs see APIs as vital for agentic AI success

API access also goes beyond RAG. It allows agents and their underlying language models not just to retrieve information, but perform database mutations and trigger external actions. This shift allows agents to carry out complex, multi-step workflows that once required multiple human touchpoints. “AI-ready APIs paired with multi-agentic capabilities can unlock a broad range of use cases, which have enterprise workflows at their heart,” says Milind Naphade, SVP of technology and head of AI foundations at Capital One. In addition, APIs are an important bridge out of previously isolated AI systems. ... AI agents can make unprecedented optimizations on the fly using APIs. Gartner reports that PC manufacturer Lenovo uses a suite of autonomous agents to optimize marketing and boost conversions. With the oversight of a planning agent, these agents call APIs to access purchase history, product data, and customer profiles, and trigger downstream applications in the server configuration process. ... But the bigger wins will likely be increased operational efficiency and cost reduction. As Fox describes, this stems from a newfound best-of-breed business agility. “When agentic AI can dynamically reconfigure business processes, using just what’s needed from the best-value providers, you’ll see streamlined operations, reduced complexity, and better overall resource allocation,” she says.


What we can learn about AI from the ‘dead internet theory’

The ‘dead internet theory,’ or the idea that much of the web is now dominated by bots and AI-generated content, is largely speculative. However, the concern behind it is worth taking seriously. The internet is changing, and the content that once made it a valuable source of knowledge is increasingly diluted by duplication, misinformation, and synthetic material. For the development of artificial intelligence, especially large language models (LLMs), this shift presents an existential problem. ... One emerging model for collecting and maintaining this kind of data is Knowledge as a Service (KaaS). Rather than scraping static sources, KaaS creates a living, structured ecosystem of contributions from real users (often experts in their fields) who continuously validate and update content. This approach takes inspiration from open-source communities but remains focused on knowledge creation and maintenance rather than code. KaaS supports AI development with a sustainable, high-quality stream of data that reflects current thinking. It’s designed to scale with human input, rather than in spite of it. ... KaaS helps AI stay relevant by providing fresh, domain-specific input from real users. Unlike static datasets, KaaS adapts as conditions change. It also brings greater transparency, illustrating directly how contributors’ inputs are utilised. This level of attribution represents a step toward more ethical and accountable AI.


The Value of Threat Intelligence in Ensuring DORA Compliance

One of the biggest challenges for security teams today is securing visibility into third-party providers within their ecosystem due to their volume, diversity, and the constant monitoring required. Utilising a Threat Intelligence Platform (TIP) with advanced capabilities can enable a security team to address this gap by monitoring and triaging threats within third-party systems through automation. It can flag potential signs of compromise, vulnerabilities, and risky behaviour, enabling organisations to take pre-emptive action before risks escalate and impact their systems. ... A major aspect of DORA is implementing a robust risk management framework. However, to keep pace with global expansion and new threats and technologies, this framework must be responsive, flexible, and up-to-date. Sourcing, aggregating, and collating threat intelligence data to facilitate this is a time-intensive task, and one that is unfeasible for many resource-stretched and siloed security teams. ... From tabletop scenarios to full-scale simulations, these exercises evaluate how well systems, processes, and people can withstand and respond to real-world cyber threats. With an advanced TIP, security teams can leverage customisable workflows to recreate specific operational stress scenarios. These scenarios can be further enhanced by feeding in real-world data on attacker behaviours, tactics, and trends, ensuring that simulations reflect actual threats rather than outdated risks.


Why your security team feels stuck

The problem starts with complexity. Security stacks have grown dense, and tools like EDR, SIEM, SOAR, CASB, and DSPM don’t always integrate well. Analysts often need to jump between multiple dashboards just to confirm whether an alert matters. Tuning systems properly takes time and resources, which many teams don’t have. So alerts pile up, and analysts waste energy chasing ghosts. Then there’s process friction. In many organizations, security actions, especially the ones that affect production systems, require multiple levels of approval. On paper, that’s to reduce risk. But these delays can mean missing the window to contain an incident. When attackers move in minutes, security teams shouldn’t be stuck waiting for a sign-off. ... “Security culture is having a bit of a renaissance. Each member of the security team may be in a different place as we undertake this transformation, which can cause internal friction. In the past, security was often tasked with setting and enforcing rules in order to secure the perimeter and ensure folks weren’t doing risky things on their machines. While that’s still part of the job, security and privacy teams today also need to support business growth while protecting customer data and company assets. If business growth is the top priority, then security professionals need new tools and processes to secure those assets.”


Your data privacy is slipping away. Here's why, and what you can do about it

In 2024, the Identity Theft Resource Center reported that companies sent out 1.3 billion notifications to the victims of data breaches. That's more than triple the notices sent out the year before. It's clear that despite growing efforts, personal data breaches are not only continuing, but accelerating. What can you do about this situation? Many people think of the cybersecurity issue as a technical problem. They're right: Technical controls are an important part of protecting personal information, but they are not enough. ... Even the best technology falls short when people make mistakes. Human error played a role in 68% of 2024 data breaches, according to a Verizon report. Organizations can mitigate this risk through employee training, data minimization—meaning collecting only the information necessary for a task, then deleting it when it's no longer needed—and strict access controls. Policies, audits and incident response plans can help organizations prepare for a possible data breach so they can stem the damage, see who is responsible and learn from the experience. It's also important to guard against insider threats and physical intrusion using physical safeguards such as locking down server rooms. ... Despite years of discussion, the U.S. still has no comprehensive federal privacy law. Several proposals have been introduced in Congress, but none have made it across the finish line. 


How To Build Smarter Factories With Edge Computing

According to edge computing experts, these are essentially rugged versions of computers, of any size, purpose-built for their harsh environments. Forget standard form factors; industrial edge devices come in varied configurations specific to the application. This means a device shaped to fit precisely where it’s needed, whether tucked inside a machine or mounted on a factory wall. ... What makes these tough machines intelligent? It’s the software revolution happening on factory floors right now. Historically, industrial computing relied on software specially built to run on bare metal; custom code directly installed on specific machines. While this approach offered reliability and consistent, deterministic performance, it came with significant limitations: slow development cycles, difficult updates and vendor lock-in. ... Communication between smart devices presents unique challenges in industrial environments. Traditional networking approaches often fall short when dealing with thousands of sensors, robots and automated systems. Standard Wi-Fi faces significant constraints in factories where heavy machinery creates electromagnetic interference, and critical operations can’t tolerate wireless dropouts.


Fighting in a cloudy arena

“There are a few primary problems. Number one is that the hyperscalers leverage free credits to get digital startups to build their entire stack on their cloud services,” Cochrane says, adding that as the startups grow, the technical requirements from hyperscalers leave them tied to that provider. “The second thing is also in the relationship they have with enterprises. They say, ‘Hey, we project you will have a $250 million cloud bill, we are going to give you a discount.’ Then, because the enterprise has a contractual vehicle, there’s a mad rush to use as much of the hyperscalers’ compute as possible because you either lose it or use it. “At the end of the day, it’s like the roach motel. You can check in, but you can’t check out,” he sums up. ... "We are exploring our options to continue to fight against Microsoft’s anti-competitive licensing in order to promote choice, innovation, and the growth of the digital economy in Europe." Mark Boost, CEO of UK cloud company Civo, said: ”However they position it, we cannot shy away from what this deal appears to be: a globally powerful company paying for the silence of a trade body, and avoiding having to make fundamental changes to their software licensing practices on a global basis.” In the months that followed this decision, things got interesting.


How passkeys work: The complete guide to your inevitable passwordless future

Passkeys are often described as a passwordless technology. In order for passwords to work as a part of the authentication process, the website, app, or other service -- collectively referred to as the "relying party" -- must keep a record of that password in its end-user identity management system. This way, when you submit your password at login time, the relying party can check to see if the password you provided matches the one it has on record for you. The process is the same, whether or not the password on record is encrypted. In other words, with passwords, before you can establish a login, you must first share your secret with the relying party. From that point forward, every time you log in, you must send your secret to the relying party again. In the world of cybersecurity, passwords are considered shared secrets, and no matter who you share your secret with, shared secrets are considered risky. ... Many of the largest and most damaging data breaches in history might not have happened had a malicious actor not discovered a shared password. In contrast, passkeys also involve a secret, but that secret is never shared with a relying party. Passkeys are a form of Zero Knowledge Authentication (ZKA). The relying party has zero knowledge of your secret, and in order to sign in to a relying party, all you have to do is prove to the relying party that you have the secret in your possession.
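The "prove you have the secret without sending it" idea can be demonstrated with a classic challenge-response protocol. The toy below is a Schnorr identification scheme with demo-sized (deliberately insecure) parameters, not the elliptic-curve WebAuthn machinery real passkeys use, but the property is the same: the relying party stores only the public key and still verifies possession of the private one.

```python
import secrets

# Toy Schnorr identification protocol. The numbers are tiny and
# insecure on purpose; real passkeys use standardized elliptic-curve
# signatures. What carries over is the zero-knowledge shape of the
# exchange: the secret x never leaves the "device".

p = 2039          # small safe prime, p = 2q + 1
q = 1019
g = 4             # generator of the order-q subgroup mod p

# Registration: the device keeps x; only the public key y is enrolled
# with the relying party.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# Login: prove possession of x without transmitting it.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)                    # prover's commitment
c = secrets.randbelow(q)            # relying party's random challenge
s = (r + c * x) % q                 # prover's response

# Relying party's check: g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof of possession verified; secret never left the device")
```

Because only (t, c, s) and the long-term public key cross the wire, a breach of the relying party's database yields nothing an attacker can replay as the user, which is the contrast with shared-secret passwords drawn above.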


Crafting a compelling and realistic product roadmap

The most challenging aspect of roadmap creation is often prioritization. Given finite resources, not everything can be built at once. Effective prioritization requires a clear framework. Common methods include scoring features based on business value versus effort, using frameworks like RICE, or focusing on initiatives that directly address key strategic objectives. Be prepared to say “no” to good ideas that don’t align with current priorities. Transparency in this process is vital. Communicate why certain items are prioritized over others to stakeholders, fostering understanding and buy-in, even when their preferred feature isn’t immediately on the roadmap. ... A product roadmap is a living document, not a static contract. The B2B software landscape is constantly evolving, with new technologies emerging, customer needs shifting, and competitive pressures mounting. A realistic roadmap acknowledges this dynamism. While it provides a clear direction, it should also be adaptable. Plan for regular reviews and updates – quarterly or even monthly – to adjust based on new insights, validated learnings, and changes in the market or business environment. Embrace iterative development and be prepared to pivot or adjust priorities as new information comes to light. 
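The RICE framework mentioned above reduces to one formula: score = (Reach x Impact x Confidence) / Effort. A sketch with invented feature names and placeholder numbers shows how it makes the trade-offs, and the resulting "no"s, transparent to stakeholders:

```python
# RICE scoring: (Reach * Impact * Confidence) / Effort.
# The candidate features and their inputs below are illustrative
# placeholders, not real roadmap data.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    return reach * impact * confidence / effort

candidates = {
    "SSO integration": rice_score(reach=800, impact=2.0, confidence=0.8, effort=4),
    "Dark mode":       rice_score(reach=2000, impact=0.5, confidence=0.9, effort=2),
    "Bulk CSV export": rice_score(reach=300, impact=2.0, confidence=1.0, effort=1),
}

# Highest score first: a defensible, explainable ordering.
for name, score in sorted(candidates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
```

Note how the low-effort export feature outranks the flashier, high-reach item; being able to show that arithmetic is what fosters the stakeholder buy-in the text describes.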


Are software professionals ready for the AI tsunami?

Modern AI assistants can translate plain-English prompts into runnable project skeletons or even multi-file apps aligned with existing style guides (e.g., Replit). This capability accelerates experimentation and learning, especially when teams are exploring unfamiliar technology stacks. A notable example is MagicSchool.com, a real-world educational platform created using AI-assisted coding workflows, showcasing how AI can powerfully convert conceptual prompts into usable products. These tools enable rapid MVP development that can be tested directly with customers. Once validated, the MVP can then be scaled into a full-fledged product. Rapid code generation can lead to fragile or opaque implementations if teams skip proper reviews, testing, and documentation. Without guardrails, it risks technical debt and poor maintainability. To stay reliable, agile teams must pair AI-generated code with sprint reviews, CI pipelines, automated testing, and strategies to handle evolving features and business needs. Recognising the importance of this shift, tech giants like Amazon (CodeWhisperer) and Google (AlphaCode) are making significant investments in AI development tools, signaling just how central this approach is becoming to the future of software engineering.

Daily Tech Digest - July 08, 2025


Quote for the day:

“If you really want the key to success, start by doing the opposite of what everyone else is doing.” -- Brad Szollose


MCP Vulnerability Exposes the AI Untrusted Code Crisis

Most organizations have rigorous approval processes before allowing arbitrary code to run in their environments whether from open source projects or vendor solutions. Yet with this new wave of tools, we’re simultaneously allowing thousands of employees to constantly update codebases with arbitrary, untrusted AI-generated code or wiring said codebases and applications to mechanisms that can alter or modify their behavior. This isn’t about stopping the use of AI coding agents or sacrificing the massive productivity gains they provide. Instead, we should standardize better ways that allow us to run untrusted code across our software development pipelines. ... As AI development tools gain adoption across enterprises, there is a new class of systems to support them that can execute code on behalf of developers. This includes AI code assistants generating and running code snippets, MCP servers providing AI systems access to local tools and data, automated testing tools executing AI-generated test cases and development agents performing complex multistep operations. Each of these represents a potential code execution pathway that often bypasses traditional security controls. The risk isn’t just that AI-generated code can be inadvertently malicious; it’s that these new systems also create pathways for untrusted code execution.
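One minimal example of the kind of standardized guardrail argued for here: never execute an AI-generated snippet in-process. The sketch below runs it in a separate interpreter with a hard timeout and a scrubbed environment; a real pipeline would layer on containers, seccomp profiles, and network policy. The helper name and limits are illustrative choices.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 2.0) -> subprocess.CompletedProcess:
    """Execute an untrusted snippet in a child interpreter, not in-process.

    -I puts Python in isolated mode (ignores PYTHONPATH and user site),
    env={} keeps secrets held in environment variables out of reach,
    and the timeout bounds runaway or stalled code.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],
            capture_output=True, text=True,
            timeout=timeout, env={},
        )
    finally:
        os.unlink(path)

result = run_untrusted("print(sum(range(10)))")
print(result.stdout.strip())  # -> 45
```

This isolates only the most obvious pathway (in-process exec); the article's point is that every new execution surface, from MCP servers to AI-generated test runners, needs an equivalent boundary rather than an exemption from review.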


Is English the next programming language? JetBrains’ CEO says no

JetBrains does need to contend with the fact that many of its users feel threatened by AI replacing them, even if CEO Kirill Skrygan notes that job displacement isn’t happening at anywhere near the rate some have suggested. Products, languages and IT infrastructure can indeed be made redundant too. We may also add that many layoff rounds use AI as an excuse to make cuts that are simply financially motivated. Still, we need to appreciate that AI is indeed changing the overall landscape. Tasks can be automated, and AI is eagerly shoveling up the developer code that’s freely available online. What about Kotlin specifically? ... “Here’s my vision. I think programming languages will evolve a lot. I admit that you may not need high-level programming languages in the classical sense anymore, but the solution still wouldn’t be English.” Skrygan envisions a middle ground between Kotlin and natural language. Currently, the closest approximation is Kotlin DSL. It’s a design doc that can be compiled as code. Ultimately, like anything digital, it converts into binary at the lowest level. The JetBrains CEO highlights how this is merely a repeat of what we’ve already seen: “People were writing in bytecode and assembler 40 years ago. Now, nobody cares about it anymore. It’s secondary.”

Privacy is blockchain’s missing link—and America’s opportunity to lead

We are at an inflection point. On one hand, blockchain has evolved from an experimental idea into a foundational layer for decentralized finance (DeFi), gaming, cross-border payments, and digital identity. On the other, the absence of privacy threatens to stall its momentum. Without privacy guarantees, Web3 won’t scale into a secure, inclusive internet economy—it will remain a risky, self-surveilling shadow of its potential. It’s not just user safety at stake. Institutional adoption, long seen as the tipping point for crypto’s maturation, is lagging in part because privacy solutions are underdeveloped. Financial institutions and enterprises cannot embrace systems that force them to reveal business-sensitive transactions to competitors and regulators alike. Privacy is not the enemy of compliance; it’s a prerequisite for serious engagement. ... First, policymakers must move past the false binary of privacy versus compliance. These are not mutually exclusive goals. Clear guidelines that embrace advanced cryptography, establish safe harbors for privacy-preserving innovation, and differentiate between consumer protection and surveillance will enable the next generation of secure digital finance. Second, industry leaders need to elevate privacy to the level of consensus mechanisms, scalability, and user experience. 


How scientists are trying to use AI to unlock the human mind

In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from a range of 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. ... Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. ... The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. 
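The “slot machine” tasks mentioned above are classic multi-armed bandit experiments. As a hedged sketch (not the researchers’ actual pipeline; the epsilon-greedy strategy and all parameters here are illustrative assumptions), this is the kind of trial-by-trial choice data such a model could be fine-tuned on:

```python
import random

def run_bandit_session(payout_probs, n_trials=100, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy participant choosing between 'slot machines'.

    Returns a list of (choice, reward) trial records -- the raw format one
    might serialize into text before fine-tuning a behavioral model.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(payout_probs)  # running estimate of each arm's payout
    counts = [0] * len(payout_probs)
    history = []
    for _ in range(n_trials):
        if rng.random() < epsilon:                  # explore occasionally
            choice = rng.randrange(len(payout_probs))
        else:                                       # otherwise exploit the best estimate
            choice = max(range(len(payout_probs)), key=lambda a: estimates[a])
        reward = 1 if rng.random() < payout_probs[choice] else 0
        counts[choice] += 1
        # Incremental mean update of the chosen arm's estimated payout.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        history.append((choice, reward))
    return history

history = run_bandit_session([0.3, 0.7], n_trials=200)
```

Each record pairs a choice with its outcome; collected across many participants and tasks, this is the shape of behavioral dataset the paper describes.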


New Study Reveals True AI Capabilities And Job Replacement Risk

For business leaders, this framework offers something really valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place. Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about what human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.


Why EU Policy Must Catch Up to the Neurotechnology Boom

After conducting a comprehensive analysis of nearly 300 neurotechnology companies worldwide, the Center for Future Generations discovered a surprising trend: among firms fully dedicated to neurotech, consumer firms now outnumber medical ones, making up 60% of the global neurotechnology landscape. And they're proliferating at an unprecedented rate—more than quadrupling in the past decade compared to the previous 25 years. ... EEG, the technology at the heart of this revolution, has been around since the 1920s. It's crude and can't read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states. And when coupled with artificial intelligence and other personal data—like location, buying behaviors, and biometrics—these patterns can reveal far more about us than we might imagine. ... As this technology moves into the mainstream, the potential for misuse becomes profound. Imagine pre-election advertising that adapts its messaging based on your emotional reaction. Imagine disinformation campaigns tailored to your subconscious fears, measured directly from your brain. Imagine authoritarian governments monitoring emotional responses to propaganda, searching for dissent in citizens' brainwaves. This marks a critical moment for European policymakers.


Enterprises Are Prioritizing Generative AI Spending in 2025

The report, "Generative AI Adoption Index," highlights how organizations are moving gen AI from experimentation to full-scale implementation and offers practical strategies to create business value. CEOs, CTOs and CIOs currently lead most gen AI innovation, but leadership structures are evolving to include specialized AI roles, such as CAIOs, at the highest levels of organizations. ... Along with CAIOs, a thoughtful change management strategy will be critical. The ideal strategy should address operating model changes, data management practices and talent pipelines. Today, just 14% of organizations have a change management strategy, but this will increase to 76% by end of 2026, highlighting a growing recognition of the need for structured adaptation. But a sizable proportion of organizations may still struggle to keep pace with AI-driven transformation, with one in four organizations still lacking a strategy in 2026. ... Third-party vendors are becoming key enablers of gen AI transformation across organizations globally. From supplying outsourced talent to offering services such as cloud computing and storage, these vendors help bridge critical technology and talent gaps. Effective gen AI deployment will depend on strong collaboration between external experts and internal teams. 


AI’s rise demands more from the UK data center market

The growing demand for digital infrastructure, fueled by the surge in AI, has intensified competition for suitable land to build data centers. This scarcity (particularly in London), coupled with the rise in construction and operational costs, makes it difficult to establish data centers in the most efficient and cost-effective manner. Similarly, an over-reliance on well-established technology clusters (such as West London) can increase resource constraints and vulnerability to power outages and downtime. With UK policy frameworks around data centers still evolving, discussions are ongoing around security, energy consumption, and specific regulatory needs. ... Similarly, traditional methods demand a high level of energy consumption to keep AI chips operating at optimal temperatures. Given how energy-intensive air cooling is, and how unlikely it is to keep up with rising cooling demands, the data center industry is reaching a critical juncture: stifle the capabilities of AI technologies by not integrating effective thermal management, or invest in a more effective, future-thinking approach to cooling? ... The UK’s data center expansion is not just a scaling project; it is a rethinking of what data centers and associated cooling infrastructures must become. 


Why CISOs are making the SASE switch: Fewer vendors, smarter security, better AI guardrails

“SASE is an existential threat to all appliance-based network security companies,” Shlomo Kramer, Cato’s CEO, told VentureBeat. “The vast majority of the market is going to be refactored from appliances to cloud service, which means SASE [is going to be] 80% of the market.” A fundamental architectural transformation is driving that shift. SASE converges traditionally siloed networking and security functions into a single, cloud-native service edge. It combines SD-WAN with critical security capabilities, including secure web gateway (SWG), cloud access security broker (CASB) and ZTNA to enforce policy and protect data regardless of where users or workloads reside. ... The SASE consolidation wave reveals how enterprises are fundamentally rethinking security architecture. With AI attacks exploiting integration gaps instantly, single-vendor SASE has become essential for both protection and operational efficiency. The reasoning is straightforward. Every vendor handoff creates vulnerability. Each integration adds latency. Security leaders know that unified platforms can help eliminate these risks while enabling business velocity. CISOs are increasingly demanding a single console, a single agent and unified policies. 


CISOs urged to fix API risk before regulation forces their hand

The widespread use of APIs to support mobile apps, cloud services, and partner integrations means that the attack surface has changed. But the security practices often haven’t. APIs today handle everything from identity claims and cardholder data to health and account information. Yet in many organizations, they remain outside the scope of standard security programs. ... Oppenheim added that meaningful oversight at the board level doesn’t require technical fluency. “Board-level metrics in such a technically complex space can be difficult to surface meaningfully, but there are still effective ways to guide oversight and investment. Directors should ask which recognised standards (e.g. FAPI) have been adopted or are in the roadmap, and whether the organization has applied a maturity model or framework to benchmark its current posture and track improvements over time.” ... So far, the biggest improvements in API security have come either through direct regulation or industry-led mandates. But pressure is building elsewhere. “Again, organizational size plays a key role,” said Oppenheim. “Larger firms and infrastructure providers are already moving ahead voluntarily – not just in banking, but in payments and identity platforms – because they see strong API security as a necessary foundation for scale and trust.”

Daily Tech Digest - July 07, 2025


Quote for the day:

"To live a creative life, we must lose our fear of being wrong." -- Anonymous


Forget the hype — real AI agents solve bounded problems, not open-world fantasies

When people imagine AI agents today, they tend to picture a chat window. A user types a prompt, and the agent responds with a helpful answer (maybe even triggers a tool or two). That’s fine for demos and consumer apps, but it’s not how enterprise AI will actually work in practice. In the enterprise, most useful agents aren’t user-initiated, they’re autonomous. They don’t sit idly waiting for a human to prompt them. They’re long-running processes that react to data as it flows through the business. They make decisions, call services and produce outputs, continuously and asynchronously, without needing to be told when to start. ... The problems worth solving in most businesses are closed-world: Problems with known inputs, clear rules and measurable outcomes. But the models we’re using, especially LLMs, are inherently non-deterministic. They’re probabilistic by design. The same input can yield different outputs depending on context, sampling or temperature. That’s fine when you’re answering a prompt. But when you’re running a business process? That unpredictability is a liability. ... Closed-world problems don’t require magic. They need solid engineering. And that means combining the flexibility of LLMs with the structure of good software engineering. 
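The non-determinism described here can be made concrete. The sketch below (illustrative, not any particular model's implementation) shows temperature-scaled softmax sampling: at temperature zero the output collapses to deterministic greedy decoding, while higher temperatures reintroduce the variability that makes LLM-driven business processes hard to pin down:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from logits after temperature scaling.

    Low temperature sharpens the distribution toward the argmax (more
    deterministic); high temperature flattens it (more varied output).
    """
    if temperature <= 0:                      # treat T=0 as greedy decoding
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):             # inverse-CDF sampling
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
greedy = [sample_token(logits, temperature=0) for _ in range(5)]  # identical every call
```

Pinning temperature to zero is one of the engineering controls the article alludes to, though it removes only one source of variation, not context sensitivity.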


Has CISO become the least desirable role in business?

Being a CISO today is not for the faint of heart. To paraphrase Rodney Dangerfield, CISOs (some, anyway) get no respect. You’d think in a job where perpetual stress over the threat of a cyberattack is the norm, there would be empathy for security leaders. Instead, they face the growing challenge of trying to elicit support across departments while managing security threats, according to a recent report from WatchGuard. ... It’s no secret CISOs are under tremendous pressure. “They’ve got the regulatory scrutiny, they’ve got public visibility,” along with the increasing complexity of threats, and “AI is just adding to that fire, and the mismatch between the accountability and the authority,” says Myers, who wrote “The CISO Dilemma,” which explores CISO turnover rates and how companies can change that moving forward. Often, CISOs don’t have the mandate to influence the business systems or processes that are creating that risk, she says. “I think that’s a real disconnect and that’s what’s really driving the burnout and turnover.” ... Some CISOs are stepping back from operational roles into more advisory ones. Patricia Titus, who recently took a position as a field CISO at startup Abnormal AI after 25 years as a CISO, does not think the CISO role has become less desirable. “The regulatory scrutiny has been there all along,” she says. “It’s gotten a light shined on it.”


Enforcement Gaps in India’s DPDP Act and the case for decentralized data protection boards

The DPDP Act’s centralized enforcement model suffers from structural weaknesses that hinder effective data protection. A primary concern is the lack of independence of the Data Protection Board. Because the DPB is both appointed and funded by the Union government, with its officials classified as civil servants under central rules, it does not enjoy the institutional autonomy typically expected of a watchdog agency. ... By design, the executive branch holds decisive power over who sits on the Board and can even influence its operations through service rules. This raises a conflict of interest, given that the government itself is a major collector and processor of citizens’ data. In the words of Justice B.N. Srikrishna, having a regulator under government control is problematic “since the State will be the biggest data processor” – a regulator must be “free from the clutches of the Government” to fairly oversee both private and government actors. ... Another structural limitation is the potential for executive interference in enforcement actions, which dilutes accountability. The DPDP Act contains provisions such as Section 27(3) enabling the Central Government to issue directions that the DPB “may modify or suspend” its own orders based on a government reference. 


The Good AI: Cultivating Excellence Through Data

In today’s enterprise landscape, the quality of AI systems depends fundamentally on the data that flows through them. While most organizational focus remains on AI models and algorithms, it’s the often-under-appreciated current of data flowing through these systems that truly determines whether an AI application becomes “good AI” or problematic technology. Just as ancient Egyptians developed specialized irrigation techniques to cultivate flourishing agriculture, modern organizations must develop specialized data practices to cultivate AI that is effective, ethical, and beneficial. My new column, “The Good AI,” will examine how proper data practices form the foundation for responsible and high-performing AI systems. We’ll explore how organizations can channel their data resources to create AI applications that are not just powerful, but trustworthy, inclusive, and aligned with human values. ... As organizations increasingly integrate artificial intelligence into their operations, the need for robust AI governance has never been more critical. However, establishing effective AI governance doesn’t happen in a vacuum—it must be built upon the foundation of solid data governance practices. The path to responsible AI governance varies significantly depending on your organization’s current data governance maturity level.


AI Infrastructure Inflection Point: 60% Cloud Costs Signal Time to Go Private

Perhaps the most immediate challenge facing IT teams identified in the research is the dramatic cost scaling of public cloud AI workloads. Unlike traditional applications where cloud costs scale somewhat linearly, AI workloads create exponential cost curves due to their intensive compute and storage requirements. The research identifies a specific economic threshold where cloud costs become unsustainable. When monthly cloud spending for a given AI workload reaches 60-70% of what it would cost to purchase and operate dedicated GPU-powered infrastructure, organizations hit their inflection point. At this threshold, the total cost of ownership calculation shifts decisively toward private infrastructure. IT teams can track this inflection point by monitoring data and model-hosting requirements relative to GPU transaction throughput. ... Identifying when to move from a public cloud to private cloud or some form of on-premises deployment is critical. Thomas noted that there are many flavors of hybrid FinOps tooling available in the marketplace that, when configured appropriately for an environment, will spot trend anomalies. Anomalies may be triggered by swings in GPU utilization, costs per token/inferences, idle percentages, and data-egress fees. On-premises factors include material variations in hardware, power, cooling, operations, and more over a set period of time.
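As a rough sketch of how an IT team might track that inflection point (all figures and the 36-month amortization window below are illustrative assumptions, not from the research):

```python
def cloud_inflection_ratio(monthly_cloud_spend, capex, monthly_opex,
                           amortization_months=36):
    """Compare monthly cloud spend against the amortized monthly cost of
    purchasing and operating dedicated GPU infrastructure."""
    owned_monthly = capex / amortization_months + monthly_opex
    return monthly_cloud_spend / owned_monthly

# Hypothetical workload: $45k/month in cloud vs. $1.2M of hardware
# amortized over 3 years plus $30k/month to operate it.
ratio = cloud_inflection_ratio(45_000, 1_200_000, 30_000)
past_inflection = ratio >= 0.60   # the 60-70% threshold cited by the research
```

In this hypothetical, the workload is already past the threshold (ratio ≈ 0.71), so the TCO calculation would favor private infrastructure.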


AI built it, but can you trust it?

AI is neither inherently good nor inherently bad from a security perspective. It’s another tool that can accelerate and magnify both good and bad behaviors. On the good side, if models can learn to assess the vulnerability state and general trustworthiness of app components, and factor that learning into code they suggest, AI can have a positive impact on the security of the resultant output. Open source projects can already leverage AI to help find potential vulnerabilities and even submit PRs to address them, but there still needs to be significant human oversight to ensure that the results actually improve the project’s security. ... If you simply trust an AI to generate all the artifacts needed to build, deploy, and run anything sophisticated, it will be very difficult to know whether it’s done so well and what risks it’s mitigated. In many ways, this looks a lot like the classic “curl and pipe to bash” kinds of risks that have long existed, where users put blind trust in what they’re getting from external sources. Many times that works out fine, but sometimes it doesn’t. ... AI can create impressive results quickly, but it doesn’t necessarily prioritize security and may in fact make many choices that degrade it. Have good architectures and controls, and human experts who really understand the recommendations it’s making and can adapt and re-prompt as necessary to strike the right balance.


How to shift left on finops, and why you need to

Building cost awareness in devops requires asking an upfront question when spinning up new cloud environments. Developers and data scientists should ask if the forecasted cloud and other costs align with the targeted business value. When cloud costs do increase because of growing utilization, it’s important to relate the cost escalation to whether there’s been a corresponding increase in business value. The FinOps Foundation recommends that SaaS and cloud-driven commercial organizations measure cloud unit economics. The basic measure calculates the difference between marginal cost and marginal revenue and determines where cloud operations break even and begin to generate a profit. Other companies can use these concepts to correlate business value and cost and make smarter cloud architecture and automation decisions. ... “Engineers especially can get tunnel vision on delivering features and the art of code, and cost modeling should happen as a part of design, at the start of a project, not at the end,” says Mason of RecordPoint. “Companies generally limit the staff with access to and knowledge of cloud cost data, which is a mistake. Companies should strive to spread awareness of costs, educating users of services with the highest cost impacts, so that more people recognize opportunities to optimize or eliminate spend.”
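The unit-economics calculation described above can be sketched in a few lines (the figures are hypothetical and the per-unit framing is a simplification of the FinOps Foundation's guidance):

```python
def unit_margin(marginal_revenue_per_unit, marginal_cloud_cost_per_unit):
    """Cloud unit economics: margin per unit of business value delivered
    (e.g. per transaction, per active user)."""
    return marginal_revenue_per_unit - marginal_cloud_cost_per_unit

def break_even_units(fixed_monthly_cloud_cost, marginal_revenue_per_unit,
                     marginal_cloud_cost_per_unit):
    """Units at which cloud operations break even and begin to profit."""
    margin = unit_margin(marginal_revenue_per_unit, marginal_cloud_cost_per_unit)
    if margin <= 0:
        raise ValueError("non-positive unit margin: operations never break even")
    return fixed_monthly_cloud_cost / margin

# Hypothetical SaaS: $50k/month of fixed cloud spend, $1.25 of revenue and
# $0.25 of cloud cost per transaction.
units = break_even_units(50_000, 1.25, 0.25)
```

Tracking this margin per unit, rather than total spend, is what lets teams tell whether a rising cloud bill reflects growing business value or eroding efficiency.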


How Cred Built Its Observability-Led Tech Stack

Third-party integrations are critical to any fintech ecosystem, and at Cred, we manage them through a rigorous, life cycle-based third-party risk management framework. This approach is designed to minimize risk and maximize reliability, with security and resilience built in from the start. Before onboarding any external partner, whether for KYC, APIs or payment rails, we conduct thorough due diligence to evaluate their security posture. Each partner is categorized as high, medium or low risk, which then informs the depth and frequency of ongoing assessments. These reviews go well beyond standard compliance checks. ... With user goals validated, our teams then move into secure architecture design. Every integration point, data exchange and system interaction is examined to preempt vulnerabilities and ensure that sensitive information is protected by default. We use ThreatShield, an internal AI-powered threat-modeling tool, to analyze documentation and architecture against STRIDE, a threat model developed at Microsoft that is widely used in cybersecurity to identify potential security threats to applications and systems. This architecture-first thinking enables us to deliver powerful features, such as surfacing hidden charges in smart statements or giving credit insights, without ever compromising the user's data or experience.


How To Tackle Tech Debt Without Slowing Innovation

Implement a “boy scout rule” under which developers are encouraged to make small improvements to existing code during feature work. This maintains development momentum while gradually improving code quality, and developers are more motivated to clean up code they’re already actively working with. ... Proactively analyze user engagement metrics to pinpoint friction points where users spend excessive time. Prioritize these areas for targeted debt reduction, aligning technical improvements closely with meaningful user experience enhancements. ... Pre-vacation handovers are an excellent opportunity to reduce tech debt. Planning and carrying out handovers before we take a holiday are crucial to maintaining smooth IT operations. Giving your employees the choice to hand tasks over to automation or a human colleague can help reduce tech debt and automate tasks. Critically, it utilizes time already allocated for addressing this work. ... Resolving technical debt is development. The Shangri-la of “no tech debt” does not survive contact with reality. It’s a balance of doing what’s right for the business. Making sure the product and engineering teams are on the same page is critical. You should have sprints where tech debt is the focus.


Why cybersecurity should be seen as a business enabler, not a blocker

Among the top challenges facing the IT sector today, says Jackson, is the rapid development of the tech world. “The pace of change is outpacing many organisations’ ability to adapt securely – whether due to AI, rapid cloud adoption, evolving regulatory frameworks like DORA, or the ongoing shortage of skilled cybersecurity professionals,” he says. “These challenges, combined with cost pressures and the perception that security is not always an enabler, make adaptation even harder.” AI in particular, to no surprise, is having a significant effect on the cybersecurity world – reshaping both sides of the “cybersecurity battlefield”, according to Jackson. “We’re seeing attackers utilise large language models (LLMs) like ChatGPT to scale social engineering and refine malicious code, while defenders are using the same tools (or leveraging them in some way) to enhance threat detection, streamline triage and gain broader context at much greater speed,” he says. While he doesn’t believe AI will have as great an impact as some suggest, he says it still represents an “exciting evolution”, particularly in how it can benefit organisations. “AI won’t replace individuals such as SOC analysts anytime soon, but it can augment and support their roles, freeing up time to focus on higher-priority tasks,” he says.

Daily Tech Digest - July 05, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


The Hidden Data Cost: Why Developer Soft Skills Matter More Than You Think

The logic is simple but under-discussed: developers who struggle to communicate with product owners, translate goals into architecture, or anticipate system-wide tradeoffs are more likely to build the wrong thing, need more rework, or get stuck in cycles of iteration that waste time and resources. These are not theoretical risks, they’re quantifiable cost drivers. According to Lumenalta’s findings, organizations that invest in well-rounded senior developers, including soft skill development, see fewer errors, faster time to delivery, and stronger alignment between technical execution and business value. ... The irony? Most organizations already have technically proficient talent in-house. What they lack is the environment to develop those skills that drive high-impact outcomes. Senior developers who think like “chess masters”—a term Lumenalta uses for those who anticipate several moves ahead—can drastically reduce a project’s TCO by mentoring junior talent, catching architecture risks early, and building systems that adapt rather than break under pressure. ... As AI reshapes every layer of tech, developers who can bridge business goals and algorithmic capabilities will become increasingly valuable. It’s not just about knowing how to fine-tune a model, it’s about knowing when not to.


Why AV is an overlooked cybersecurity risk

As cyber attackers become more sophisticated, they’re shifting their attention to overlooked entry points like AV infrastructure. A good example is YouTuber Jim Browning’s infiltration of a scam call center, where he used unsecured CCTV systems to monitor and expose criminals in real time. This highlights the potential for AV vulnerabilities to be exploited for intelligence gathering. To counter these risks, organizations must adopt a more proactive approach. Simulated social engineering and phishing attacks can help assess user awareness and expose vulnerabilities in behavior. These simulations should be backed by ongoing training that equips staff to recognize manipulation tactics and understand the value of security hygiene. ... To mitigate the risks posed by vulnerable AV systems, organizations should take a proactive and layered approach to security. This includes regularly updating device firmware and underlying software packages, which are often left outdated even when new versions are available. Strong password policies should be enforced, particularly on devices running webservers, with security practices aligned to standards like the OWASP Top 10. Physical access to AV infrastructure must also be tightly controlled to prevent unauthorized LAN connections. 


EU Presses for Quantum-Safe Encryption by 2030 as Risks Grow

The push comes amid growing concern about the long-term viability of conventional encryption techniques. Current security protocols rely on complex mathematical problems — such as factoring large numbers — that would take today’s classical computers thousands of years to solve. But quantum computers could potentially crack these systems in a fraction of the time, opening the door to what cybersecurity experts refer to as “store now, decrypt later” attacks. In these attacks, hackers collect encrypted data today with the intention of breaking the encryption once quantum technology matures. Germany’s Federal Office for Information Security (BSI) estimates that conventional encryption could remain secure for another 10 to 20 years in the absence of sudden breakthroughs, The Munich Eye reports. Europol has echoed that forecast, suggesting a 15-year window before current systems might be compromised. While the timeline is uncertain, European authorities agree that proactive planning is essential. PQC is designed to resist attacks from both classical and quantum computers by using algorithms based on different kinds of hard mathematical problems. These newer algorithms are more complex and require different computational strategies than those used in today’s standards like RSA and ECC. 
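The planning logic behind such deadlines is often summarized as Mosca's inequality: if the years data must remain confidential (x) plus the years a PQC migration takes (y) exceed the years until quantum computers break current encryption (z), then data harvested today is already exposed to store-now-decrypt-later attacks. A hedged sketch, with illustrative numbers:

```python
def mosca_at_risk(shelf_life_years, migration_years, quantum_horizon_years):
    """Mosca's inequality: if x + y > z, encrypted data captured today can
    be decrypted while it still needs to remain confidential."""
    return shelf_life_years + migration_years > quantum_horizon_years

# Hypothetical: records must stay confidential for 10 years, a PQC migration
# takes 7, and the quantum horizon sits at Europol's 15-year estimate.
at_risk = mosca_at_risk(10, 7, 15)   # 17 > 15, so the data is already exposed
```

The takeaway matches the EU's urgency: even with a 15-year horizon, long-lived data plus multi-year migrations leave little slack, which is why the 2030 target is framed as proactive rather than premature.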


MongoDB Doubles Down on India's Database Boom

Chawla says MongoDB is helping Indian enterprises move beyond legacy systems through two distinct approaches. "The first one is when customers decide to build a completely new modern application, gradually sunsetting the old legacy application," he explains. "We work closely with them to build these modern systems." ... Despite this fast-paced growth, Chawla points out several lingering myths in India. "A lot of customers still haven't realised that if you want to build a modern application, especially one that's AI-driven, you can't build it on a relational structure," he explains. "Most of the data today is unstructured and messy. So you need a database that can scale, can handle different types of data, and support modern workloads." ... Even those trying to move away from traditional databases often fall into the trap of viewing PostgreSQL as a modern alternative. "PostgreSQL is still relational in nature. It has the same row-and-column limitations and scalability issues." He also adds that if companies want to build a future-proof application, especially one that infuses AI capabilities, they need something that can handle all data types and offers native support for features like full-text search, hybrid search, and vector search. Other NoSQL players such as Redis and Apache Cassandra also have significant traction in India.
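The native vector search Chawla refers to is exposed in MongoDB Atlas as a $vectorSearch aggregation stage. As a hedged sketch (the index and field names here are illustrative assumptions), the pipeline can be assembled as plain Python dicts before being handed to a collection's aggregate call:

```python
def build_vector_search_pipeline(query_vector, index="embeddings_idx",
                                 path="embedding", limit=5, num_candidates=100):
    """Build a MongoDB Atlas $vectorSearch aggregation pipeline as plain
    Python dicts, ready to pass to collection.aggregate(...)."""
    return [
        {
            "$vectorSearch": {
                "index": index,               # name of the Atlas vector index
                "path": path,                 # document field holding embeddings
                "queryVector": query_vector,  # embedding of the search query
                "numCandidates": num_candidates,
                "limit": limit,
            }
        },
        # Keep only the fields the application needs, plus the relevance score.
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.12, -0.05, 0.33])
```

Because the stage runs inside the same aggregation framework as full-text and structured queries, it is the mechanism behind the hybrid-search capability mentioned above.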


AI only works if the infrastructure is right

The successful implementation of artificial intelligence is therefore closely linked to the underlying infrastructure. But how you define that AI infrastructure is open to debate. An AI infrastructure always consists of different components, which is clearly reflected in the diverse backgrounds of the participating parties. As a customer, how can you best assess such an AI infrastructure? ... For companies looking to get started with AI infrastructure, a phased approach is crucial. Start small with a pilot, clearly define what you want to achieve, and expand step by step. The infrastructure must grow with the ambitions, not the other way around. A practical approach must be based on the objectives. Then the software, middleware, and hardware will be available. For virtually every use case, you can choose from the necessary and desired components. ... At the same time, the AI landscape requires a high degree of flexibility. Technological developments are rapid, models change, and business requirements can shift from quarter to quarter. It is therefore essential to establish an infrastructure that is not only scalable but also adaptable to new insights or shifting objectives. Consider the possibility of dynamically scaling computing capacity up or down, compressing models where necessary, and deploying tooling that adapts to the requirements of the use case. 


Software abstraction: The missing link in commercially viable quantum computing

Quantum Infrastructure Software delivers this essential abstraction, turning bare-metal QPUs into useful devices, much the way data center providers integrate virtualization software for their conventional systems. Current offerings cover all of the functions typically associated with the classical BIOS up through virtual machine Hypervisors, extending to developer tools at the application level. Software-driven abstraction of quantum complexity away from the end users lets anyone, irrespective of their quantum expertise, leverage quantum computing for the problems that matter most to them. ... With a finely tuned quantum computer accessible, a user must still execute many tasks to extract useful answers from the QPU, in analogy with the careful memory management required to gain practical acceleration with GPUs. Most importantly, in executing a real workload, they must convert high-level “assembly-language” logical definitions of quantum applications into hardware-specific “machine-language” instructions that account for the details of the QPU in use, and deploy countermeasures where errors might leak in. These are typically tasks that can only be handled by (expensive!) specialists in quantum-device operation.


Guest Post: Why AI Regulation Won’t Work for Quantum

Artificial intelligence has been in the regulatory spotlight for the past seven to ten years, and there is no shortage of governments and global institutions, as well as corporations and think tanks, putting forth regulatory frameworks in response to this much-hyped technology. AI makes decisions in a “black box,” creating a need for “explainability” in order to fully understand how determinations by these systems affect the public. With the democratization of AI systems, there is the potential for bad actors to create harm in a decentralized ecosystem. ... Because quantum systems do not learn on their own, evolve over time, or make decisions based on training data, they do not pose the same kind of existential or social threats that AI does. While the implications of quantum breakthroughs will no doubt be profound, especially in cryptography, defense, drug development, and materials science, the core risks are tied to who controls the technology and for what purpose. Regulating who controls technology and ensuring bad actors are disincentivized from using it in harmful ways is the stuff of traditional regulation across many sectors, so regulating quantum should prove somewhat less challenging than current AI regulatory debates would suggest.


Validation is an Increasingly Critical Element of Cloud Security

Security engineers simply don’t have the time or resources to familiarize themselves with the vast number of cloud services available today. In the past, security engineers primarily needed to understand Windows and Linux internals, Active Directory (AD) domain basics, networks and some databases and storage solutions. Today, they need to be familiar with hundreds of cloud services, from virtual machines (VMs) to serverless functions and containers at different levels of abstraction. ... It’s also important to note that cloud environments are particularly susceptible to misconfigurations. Security teams often primarily focus on assessing the performance of their preventative security controls, searching for weaknesses in their ability to detect attack activity. But this overlooks the danger posed by misconfigurations, which are not caused by bad code, software bugs, or malicious activity. That means they don’t fall within the definition of “vulnerabilities” that organizations typically test for—but they still pose a significant danger.  ... Securing the cloud isn’t just about having the right solutions in place — it’s about determining whether they are functioning correctly. But it’s also about making sure attackers don’t have other, less obvious ways into your network.
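Because misconfigurations are not code bugs, catching them means validating deployed settings against explicit policy. A minimal sketch of such a validation pass, with invented resource fields and rules (a real scanner would pull this inventory from the cloud provider's APIs rather than a hard-coded list):

```python
# Hypothetical inventory of deployed resources, as a real tool
# would fetch them from the provider's APIs.
resources = [
    {"id": "bucket-1", "type": "storage",  "public_access": True},
    {"id": "db-1",     "type": "database", "encrypted_at_rest": False},
    {"id": "vm-1",     "type": "vm",       "open_ports": [22, 3389]},
]

# Policy rules: resource type -> predicate that flags a misconfiguration.
RULES = {
    "storage":  lambda r: r.get("public_access") is True,
    "database": lambda r: not r.get("encrypted_at_rest", True),
    "vm":       lambda r: 3389 in r.get("open_ports", []),  # RDP exposed
}

def find_misconfigurations(inventory):
    """Return the IDs of resources that violate a policy rule."""
    return [r["id"] for r in inventory
            if (rule := RULES.get(r["type"])) and rule(r)]

print(find_misconfigurations(resources))  # → ['bucket-1', 'db-1', 'vm-1']
```

Note that every finding here would pass a vulnerability scan: nothing is buggy or malicious, the settings are simply wrong, which is why this class of check has to be run as its own validation step.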


Build and Deploy Scalable Technical Architecture a Bit Easier

A critical challenge when transforming proof-of-concept systems into production-ready architecture is balancing rapid development with future scalability. At one organization, I inherited a monolithic Python application that was initially built as a lead distribution system. The prototype performed adequately in controlled environments but struggled when processing real-world address data, which, by their nature, contain inconsistencies and edge cases. ... Database performance often becomes the primary bottleneck in scaling systems. Domain-Driven Design (DDD) has proven particularly valuable for creating loosely coupled microservices, with its strategic phase ensuring that the design architecture properly encapsulates business capabilities, and the tactical phase allowing the creation of domain models using effective design patterns. ... For systems with data retention policies, table partitioning proved particularly effective, turning one table into several while maintaining the appearance of a single table to the application. This allowed us to implement retention simply by dropping entire partition tables rather than performing targeted deletions, which prevented database bloat. These optimizations reduced average query times from seconds to milliseconds, enabling support for much higher user loads on the same infrastructure.
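The partition-and-drop retention pattern described above can be sketched by generating the kind of DDL a PostgreSQL range-partitioned setup would use. The `leads` table and monthly granularity are illustrative assumptions, not details from the system described:

```python
from datetime import date

def month_partition_ddl(parent: str, month: date) -> str:
    """DDL for one monthly range partition of a partitioned parent table."""
    nxt = date(month.year + (month.month == 12), month.month % 12 + 1, 1)
    name = f"{parent}_{month:%Y_%m}"
    return (f"CREATE TABLE {name} PARTITION OF {parent} "
            f"FOR VALUES FROM ('{month}') TO ('{nxt}');")

def retention_drop_ddl(parent: str, month: date) -> str:
    """Retention is enforced by dropping whole partitions, not row DELETEs."""
    return f"DROP TABLE {parent}_{month:%Y_%m};"

print(month_partition_ddl("leads", date(2025, 6, 1)))
# → CREATE TABLE leads_2025_06 PARTITION OF leads
#   FOR VALUES FROM ('2025-06-01') TO ('2025-07-01');
print(retention_drop_ddl("leads", date(2024, 6, 1)))
# → DROP TABLE leads_2024_06;
```

Dropping a partition is a metadata-only operation, which is why it avoids the bloat and vacuum pressure that mass `DELETE`s would leave behind.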


What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

The narrative that constraints kill innovation is both lazy and false. In cybersecurity, we’ve seen the opposite. Federal mandates like the Federal Information Security Modernization Act (FISMA), which forced agencies to map their systems, rate data risks, and monitor security continuously, and state-level laws like California’s data breach notification statute created the pressure and incentives that moved security from afterthought to design priority.  ... The irony is that the people who build AI, like their cybersecurity peers, are more than capable of innovating within meaningful boundaries. We’ve both worked alongside engineers and product leaders in government and industry who rise to meet constraints as creative challenges. They want clear rules, not endless ambiguity. They want the chance to build secure, equitable, high-performing systems — not just fast ones. The real risk isn’t that smart policy will stifle the next breakthrough. The real risk is that our failure to govern in real time will lock in systems that are flawed by design and unfit for purpose. Cybersecurity found its footing by designing for uncertainty and codifying best practices into adaptable standards. AI can do the same if we stop pretending that the absence of rules is a virtue.