
Daily Tech Digest - March 16, 2026


Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik




Why many enterprises struggle with outdated digital systems & how to fix them

The article on Express Computer, "Why many enterprises struggle with outdated digital systems & how to fix them," explores the pervasive issue of legacy technical debt. Many organizations remain tethered to aging infrastructure that stifles innovation and hampers agility. The struggle often stems from the prohibitive costs of replacement, the immense complexity of migrating mission-critical processes, and a fundamental fear of business disruption. Governance layers and siloed ownership further exacerbate these challenges, creating compounding "enterprise debt" across processes, data, and talent. To address these bottlenecks, the author advocates for a strategic shift toward a product mindset and incremental modernization instead of high-risk, wholesale replacements. Recommended fixes include mapping system dependencies, quantifying inefficiencies, and following a clear roadmap that progresses from stabilization to systematic optimization. By decoupling tightly integrated components and establishing clear ownership, enterprises can transform their brittle legacy systems into scalable, resilient assets. Fostering a culture of continuous improvement and aligning digital transformation with core business objectives are equally vital for survival. Ultimately, the piece emphasizes that overcoming outdated digital systems is a strategic necessity in a fast-paced market, requiring a balanced approach to technical remediation and organizational change to ensure long-term competitiveness.


COBOL developers will always be needed, even as AI takes the lead on modernization projects

The article from ITPro explores the enduring necessity of COBOL developers amidst the rise of artificial intelligence in legacy modernization projects. While AI is increasingly being marketed as a "silver bullet" for converting ancient COBOL codebases into modern languages like Java, industry experts argue that these digital transformations cannot succeed without human domain expertise. COBOL remains the backbone of global financial and administrative systems, housing decades of intricate business logic that AI often fails to interpret accurately. The piece emphasizes that while generative AI can significantly accelerate code translation and documentation, it lacks the contextual understanding required to define what a successful transformation actually looks like. Consequently, veteran developers are essential for overseeing AI-driven migrations, identifying potential risks, and ensuring that the logic preserved in the legacy system is correctly replicated in the new environment. Rather than replacing the workforce, AI acts as a collaborative tool that shifts the developer's role from manual coding to strategic orchestration. Ultimately, the survival of critical infrastructure depends on a hybrid approach that combines the speed of machine learning with the deep-seated knowledge of COBOL specialists, proving that legacy expertise is more valuable than ever in the modern era.


The CTO is dead. Long live the CTO

In the article "The CTO is dead. Long live the CTO" on CIO.com, Marios Fakiolas argues that the traditional role of the Chief Technology Officer as a technical gatekeeper and "human compiler" has become obsolete due to the rise of advanced AI. Modern Large Language Models can now design complex system architectures in minutes, outperforming humans in handling multidimensional constraints and technical interdependencies. Consequently, the new era demands a "multiplier" who shifts focus from providing technical answers to architecting systems that enable continuous organizational intelligence. Today’s CTO is measured not by architectural purity, but by tangible business outcomes such as gross margin, ROI, and operational velocity. This evolution requires leaders to move beyond their "AI comfort zone" of fancy demos and instead tackle difficult structural challenges like cost optimization and team restructuring. The author emphasizes that the modern leader must lead from the front, ruthlessly killing legacy "darlings" and designing for impermanence rather than static stability. Ultimately, the successful CTO must transition from being a bottleneck to becoming an orchestrator of AI agents and human expertise, ensuring that the entire organization can pivot rapidly without trauma. By embracing this proactive mindset, technology leaders can transcend the gatekeeping era and drive meaningful innovation in a fierce, AI-driven market.


When insider risk is a wellbeing issue, not just a disciplinary one

In the article "When insider risk is a wellbeing issue, not just a disciplinary one" on Security Boulevard, Katie Barnett argues for a paradigm shift in how organizations manage insider threats. Moving beyond traditional framing—which often focuses on malicious intent and punitive disciplinary measures—the author highlights that many security incidents are actually the byproduct of employee stress, fatigue, and disengagement. In a modern work environment characterized by digital isolation and economic uncertainty, personal strains such as financial pressure or burnout can erode professional judgment, making individuals more susceptible to manipulation or unintentional policy violations. The piece emphasizes that relying solely on technical controls and monitoring is insufficient; these tools do not address the underlying human factors that lead to risk. Instead, Barnett advocates for a proactive approach where wellbeing is treated as a core pillar of organizational resilience. This involves training managers to recognize early behavioral warning signs, fostering a supportive culture where staff feel safe raising concerns, and creating interdepartmental cooperation between HR and security teams. Ultimately, the article posits that by integrating support and psychological safety into the security strategy, organizations can prevent incidents before they escalate, strengthening their overall security posture through empathy rather than just compliance.


What it takes to win that CSO role

In the CSO Online article "What it takes to win that CSO role," David Weldon explores the transformation of the Chief Security Officer position into a high-stakes C-suite role requiring board-level accountability. No longer a back-office function, the modern CSO operates at the critical intersection of technology, regulatory exposure, revenue continuity, and brand trust. Achieving success in this position demands a shift from being a "cost center" to a "trust center," where security is positioned as a strategic business enabler that supports revenue growth rather than just a preventative measure. Key requirements include deep expertise in identity and access management and a sophisticated understanding of emerging threats like shadow AI, data poisoning, and model risk. Beyond technical prowess, financial acumen is non-negotiable; aspiring CSOs must translate security investments into business value, such as reduced insurance premiums or contractual leverage. Communication is paramount, as the role involves constant negotiation and the ability to translate complex risks for non-technical stakeholders. Ultimately, winning the role requires aligning accountability with authority and demonstrating the operating depth to maintain business resilience during sustained outages. By evolving from a "no" person to a "how" person, successful CSOs ensure that security becomes a foundational pillar of organizational success and customer confidence.


Human-Centered AI Is Becoming A Leadership Imperative

In his Forbes article, "Human-Centered AI Is Becoming A Leadership Imperative," Rhett Power argues that while artificial intelligence offers unprecedented industrial opportunities, its successful implementation depends entirely on a shift from technical obsession to human-centric leadership. Power contends that unchecked AI deployment often fails because it ignores the social and cognitive arrangements necessary for technology to thrive. To bridge the widening gap between technological promise and actual business value, leaders must adopt three foundational principles: prioritizing desired business outcomes over specific tools, evolving training to support role-specific enablement, and treating human-centered design as a core competitive advantage. Power identifies a new leadership paradigm where executives must serve as visionary guides who align AI with human values, ethical guardians who ensure transparency and bias mitigation, and human advocates who prioritize employee experience. By focusing on augmenting rather than replacing human expertise, organizations can transform AI into a seamless collaborative partner that drives long-term resilience and innovation. Ultimately, the article emphasizes that the true value of AI lies in its ability to extend the reach of human judgment, making the integration of empathy and ethical oversight a non-negotiable requirement for modern executive accountability in a rapidly evolving digital landscape.


Employee Experience 2.0: AI as the Performance Engine of the Work Operating System

In the article "Employee Experience 2.0: AI as the Performance Engine of the Work Operating System," Jeff Corbin outlines an essential evolution in workplace management. While the first version of the Employee Experience (EX 1.0) focused on cross-departmental alignment between HR, IT, and Communications, the author argues that human capacity alone is no longer sufficient to manage the modern digital workspace. EX 2.0 introduces artificial intelligence as a "performance layer" that transforms the work operating system from a static framework into a self-optimizing engine. AI addresses critical challenges such as "digital friction"—where employees waste nearly 30% of their day searching through disconnected systems like SharePoint and ServiceNow—by acting as an automated editor for content governance. Beyond cleaning up data, AI-driven EX 2.0 enables hyper-personalization of communications and provides predictive analytics that can identify turnover risks or workflow bottlenecks before they escalate. By integrating AI as a core architectural component, organizations can move beyond manual coordination to create a frictionless environment that boosts engagement and productivity. Ultimately, the piece calls for leaders to upgrade their governance models, positioning AI not just as a tool, but as a collaborative partner that ensures the employee experience remains agile and effective in a technology-driven era.


The Next Era of UX and Analytics, and Merging Conversational AI with Design-to-Code

The article "The Transformation of Software Development: Smarter UI Components, the Next Era of UX and Analytics" explores the profound shift from static, reactive user interfaces to proactive, intelligent systems. Modern software development is evolving beyond standard component libraries toward "smarter" UI elements that leverage embedded analytics and machine learning to adapt to user behavior in real-time. This transformation allows digital interfaces to anticipate user needs, personalize layouts dynamically, and optimize complex workflows without manual intervention. By integrating sophisticated telemetry directly into front-end components, developers gain granular, actionable insights into performance and engagement, effectively bridging the gap between user experience and technical execution. This evolution significantly impacts the modern DevOps lifecycle, as development teams move from building isolated features to orchestrating continuous learning environments. The article further highlights that these intelligent components reduce the cognitive load for end-users by surfacing relevant information and simplifying intricate navigations. Ultimately, the synergy between advanced data analytics and front-end engineering is setting a new industry standard for digital excellence, where personalization and efficiency are core to the process. Organizations that embrace this era of "smarter" components will deliver highly tailored experiences that drive superior retention and user satisfaction in an increasingly competitive market.


Certificate lifespans are shrinking and most organizations aren’t ready

The article "Certificate lifespans are shrinking and most organizations aren't ready," featured on Help Net Security, outlines the critical challenges businesses face as TLS certificate validity periods compress from one year down to 47 days. John Murray of GlobalSign emphasizes that this rapid shift, driven by browser requirements, necessitates a complete overhaul of traditional manual certificate management. To avoid operational disruptions and outages, organizations must prioritize "discovery" as the foundational step, utilizing tools like GlobalSign's Atlas or LifeCycle X to inventory every certificate and platform. This proactive approach is not only vital for managing shorter lifecycles but also serves as essential preparation for the eventual migration to post-quantum cryptography. Murray suggests that manual spreadsheets are no longer sustainable; instead, businesses should adopt automation protocols like ACME and shift toward flexible, SAN-based licensing models to remove procurement friction. While larger enterprises may have dedicated PKI teams, mid-market and smaller organizations are at a higher risk of being caught off guard. By establishing automated renewal pipelines and closing the specialized knowledge gap in PKI expertise, companies can build a resilient security posture. Ultimately, the window for preparation is closing, and integrating automated lifecycle management is now a strategic imperative rather than a future luxury.
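With 47-day certificates, renewal scheduling has to be computed, not diarized. The sketch below shows the date arithmetic an automated renewal pipeline might use; the one-third-of-lifetime renewal buffer and function names are illustrative assumptions, not GlobalSign or ACME-mandated values.

```python
from datetime import date, timedelta

CERT_LIFETIME_DAYS = 47  # the compressed maximum validity discussed above

def renewal_date(issued: date, lifetime_days: int = CERT_LIFETIME_DAYS,
                 buffer_fraction: float = 1 / 3) -> date:
    """Renew when roughly a third of the lifetime remains (an assumed policy)."""
    expiry = issued + timedelta(days=lifetime_days)
    return expiry - timedelta(days=int(lifetime_days * buffer_fraction))

def needs_renewal(issued: date, today: date) -> bool:
    """True once today has reached the scheduled renewal date."""
    return today >= renewal_date(issued)

issued = date(2026, 3, 1)
print(renewal_date(issued))                      # 2026-04-02: 15 days before expiry
print(needs_renewal(issued, date(2026, 4, 5)))   # True: past the renewal window
```

At 47-day lifetimes this window comes around roughly every month per certificate, which is why the article treats manual spreadsheets as unsustainable and ACME-style automation as the baseline.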


Agoda CTO on why AI still needs human oversight

In the Tech Wire Asia article, Agoda’s Chief Technology Officer, Idan Zalzberg, discusses the essential role of human oversight in an era dominated by artificial intelligence. While AI tools have significantly accelerated developer workflows and boosted productivity—with early experiments at Agoda showing a 27% uplift—Zalzberg emphasizes that these technologies remain supplementary. The primary challenge lies in the inherent unpredictability and non-deterministic nature of generative AI, which differs from traditional software by producing inconsistent outputs. Consequently, Agoda maintains a strict policy where human engineers remain fully accountable for all code, regardless of its origin. Quality control remains rigorous, utilizing the same static analysis and automated testing frameworks applied to human-written scripts. Zalzberg notes that the evolution of the engineering role shifts focus toward critical thinking, strategic decision-making, and "evaluation"—a statistical method for assessing AI performance. Beyond technical management, the article highlights how cultural attitudes toward risk influence AI adoption rates across different regions. Ultimately, Zalzberg argues that AI maturity is defined by a balanced approach: leveraging the speed of automation while ensuring that sensitive decisions—such as pricing or critical architecture—are governed by human judgment and a centralized gateway to manage security and costs effectively.
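The "evaluation" discipline Zalzberg describes can be pictured as a statistical gate on model output rather than a line-by-line code review. This is a minimal sketch of that idea, assuming a simple pass-rate metric and an invented threshold; it is not Agoda's actual methodology.

```python
def pass_rate(predictions, expected):
    """Fraction of labeled cases the AI output matched."""
    correct = sum(p == e for p, e in zip(predictions, expected))
    return correct / len(expected)

def gate(predictions, expected, threshold=0.95):
    """Statistical go/no-go: release only if the pass rate clears the bar."""
    return pass_rate(predictions, expected) >= threshold

# Hypothetical labeled evaluation set for a non-deterministic model
preds = ["approve", "deny", "approve", "deny"]
gold  = ["approve", "deny", "approve", "approve"]
print(pass_rate(preds, gold))               # 0.75
print(gate(preds, gold, threshold=0.95))    # False: below the assumed bar
```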

Daily Tech Digest - October 02, 2025


Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer


AI cost overruns are adding up — with major implications for CIOs

Many organizations appear to be “flying blind” while deploying AI, adds John Pettit, CTO at Google Workspace professional services firm Promevo. If a CIO-led AI project misses budget by a huge margin, it reflects on the CIO’s credibility, he adds. “Trust is your most important currency when leading projects and organizations,” he says. “If your AI initiative costs 50% more than forecast, the CFO and board will hesitate before approving the next one.” ... Beyond creating distrust in IT leadership, missed cost estimates also hurt the company’s bottom line, notes Farai Alleyne, SVP of IT operations at accounts payable software vendor Billtrust. “It is not just an IT spending issue, but it could materialize into an overall business financials issue,” he says. ... enterprise leaders often assume AI coding assistants or no-code/low-code tools can take care of most of the software development needed to roll out a new AI tool. These tools can be used to create small prototypes, but for enterprise-grade integrations or multi-agent systems, the complexity creates additional costs, he says. ... In addition, organizations often underestimate the cost of operating an AI project, he says. Token usage for vectorization and LLM calls can cost tens of thousands of dollars per month, but hosting your own models isn’t cheap, either, with on-premises infrastructure costs potentially running into the thousands of dollars per month.
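The operating-cost point above is ultimately simple arithmetic that many forecasts skip. A back-of-the-envelope sketch, with all prices and volumes invented for illustration (they are not vendor pricing):

```python
def monthly_llm_cost(calls_per_day: int, tokens_per_call: int,
                     usd_per_million_tokens: float, days: int = 30) -> float:
    """Rough monthly spend on token usage for LLM calls."""
    total_tokens = calls_per_day * tokens_per_call * days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# e.g. 50k calls/day averaging 4k tokens each at an assumed $5 per million tokens
cost = monthly_llm_cost(50_000, 4_000, 5.0)
print(f"${cost:,.0f}/month")  # $30,000/month
```

Even modest per-token prices multiply into the "tens of thousands of dollars per month" the article warns about once call volume and context sizes are realistic.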


AI-Powered Digital Transformation: A C-Suite Blueprint For The Future Of Business

At its core, digital transformation is a strategic endeavor, not a technological one. To succeed, it should be at the forefront of the organizational strategy. This means moving beyond simply automating existing processes and instead asking how AI enables new ways of creating value. The shift is from operational efficiency to business model innovation. ... True digital leaders possess a visionary mindset and the critical competencies to guide their teams through change. They must be more than tech-savvy; they must be emotionally intelligent and capable of inspiring trust. This demands an intentional effort to develop leaders who can bridge the gap between deep business acumen and digital fluency. ... With the strategic, cultural and data foundations in place, organizations can focus on building a scalable and secure digital infrastructure. This may involve adopting cloud computing to provide flexible resources needed for big data processing and AI model deployment. It can also mean investing in a range of complementary technologies that, when integrated, create a cohesive and intelligent ecosystem. ... Digital transformation is a complex, continuous journey, not a single destination. This framework provides a blueprint, but its success requires leadership. The challenge is not technological; it's a test of leadership, culture and strategic foresight.


Why Automation Fails Without the Right QA Mindset

Automation alone doesn’t guarantee quality — it is only as effective as the tests it is scripted to run. If the requirements are misunderstood, automated tests may pass while critical issues remain undetected. I have seen failures where teams relied solely on automation without involving proper QA practices, leading to tests that validated incorrect behavior. Automation frequently fails to detect new or unexpected issues introduced by system upgrades. It often misses critical problems such as faulty data mapping, incomplete user interface (UI) testing and gaps in test coverage due to outdated scripts. Lack of adaptability is another common obstacle that I’ve repeatedly seen undermine automation testing efforts. When UI elements are tightly coupled, even minor changes can disrupt test cases. With the right QA mindset, this challenge is anticipated — promoting modular, maintainable automation strategies capable of adapting to frequent UI and logic changes. Automation lacks the critical analysis required to validate business logic and perform true end-to-end testing. From my experience, the human QA mindset proved essential during the testing of a mortgage loan calculation system. While automation handled standard calculations and data validation, it could not assess whether the logic aligned with real-world lending rules.


Stop Feeding AI Junk: A Systematic Approach to Unstructured Data Ingestion

Worse, bad data reduces accuracy. Poor quality data not only adds noise, but it also leads to incorrect outputs that can erode trust in AI systems. The result is a double penalty: wasted money and poor performance. Enterprises must therefore treat data ingestion as a discipline in its own right, especially for unstructured data. Many current ingestion methods are blunt instruments. They connect to a data source and pull in everything, or they rely on copy-and-sync pipelines that treat all data as equal. These methods may be convenient, but they lack the intelligence to separate useful information from irrelevant clutter. Such approaches create bloated AI pipelines that are expensive to maintain and impossible to fine-tune. ... Once data is classified, the next step is to curate it. Not all data is equal. Some information may be outdated, irrelevant, or contradictory. Curating data means deliberately filtering for quality and relevance before ingestion. This ensures that only useful content is fed to AI systems, saving compute cycles and improving accuracy. This also ensures that RAG and LLM solutions can utilize their context windows on tokens for relevant data and not get cluttered up with irrelevant junk. ... Generic ingestion pipelines often lump all data into a central bucket. A better approach is to segment data based on specific AI use cases. 
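The classify-then-curate step described above amounts to filtering documents for relevance and freshness before they ever reach the pipeline. A minimal sketch, assuming invented document fields and thresholds:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    source: str
    topic: str
    last_modified: date

def curate(docs, allowed_topics, cutoff: date):
    """Keep only documents that are on-topic and not stale."""
    return [d for d in docs
            if d.topic in allowed_topics and d.last_modified >= cutoff]

docs = [
    Document("sharepoint", "pricing", date(2025, 6, 1)),
    Document("wiki", "pricing", date(2019, 1, 1)),           # stale: filtered out
    Document("servicenow", "hr-policy", date(2025, 5, 1)),   # off-topic: filtered out
]
kept = curate(docs, {"pricing"}, cutoff=date(2024, 1, 1))
print([d.source for d in kept])  # ['sharepoint']
```

Segmenting by use case, as the article suggests, is then a matter of running `curate` with a different `allowed_topics` set per AI workload instead of pulling everything into one central bucket.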


Five critical API security flaws developers must avoid

Developers might assume that if an API endpoint isn’t publicly advertised, it’s inherently secure, a dangerous myth known as “security by obscurity.” This mistake manifests in a few critical ways: developers may use easily guessable API keys or leave critical endpoints entirely unprotected, allowing anyone to access them without proving their identity. ... You must treat all incoming data as untrusted, meaning all input must be validated on the server-side. Your developers should implement comprehensive server-side checks for data types, formats, lengths, and expected values. Instead of trying to block everything that is bad, it is more secure to define precisely what is allowed. Finally, before displaying or using any data that comes back from the API, ensure it is properly sanitized and escaped to prevent injection attacks from reaching end-users. ... Your teams must adhere to the “only what’s necessary” principle by designing API responses to return only the absolute minimum data required by the consuming application. For production environments, configure systems to suppress detailed error messages and stack traces, replacing them with generic errors while logging the specifics internally for your team. ... Your security strategy must incorporate rate limiting to apply strict controls on the number of requests a client can make within a given timeframe, whether tracked by IP address, authenticated user, or API key.
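Two of the controls above — defining precisely what input is allowed, and rate limiting per client — can be sketched compactly. The allowlist pattern, bucket capacity, and refill rate here are illustrative assumptions, not prescriptions:

```python
import re
import time

# Allowlist validation: define what IS permitted instead of blocking bad input.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_username(value: str) -> bool:
    return bool(USERNAME_RE.fullmatch(value))

class TokenBucket:
    """Allow a burst of `capacity` requests, refilled at `rate` per second."""
    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)
print(validate_username("alice_01"))           # True: matches the allowlist
print(validate_username("alice; DROP TABLE"))  # False: rejected outright
print([bucket.allow() for _ in range(6)])      # burst of 5 allowed, 6th throttled
```

In production the same limiter would be keyed by IP address, authenticated user, or API key, as the article notes, and backed by shared state rather than in-process memory.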


Disaster recovery and business continuity: How to create an effective plan

If your disaster recovery and business continuity plan has been gathering dust on the shelf, it’s time for a full rebuild from the ground up. Key components include strategies such as minimum viable business (MVB); emerging technologies such as AI and generative AI; and tactical processes and approaches such as integrated threat hunting, automated data discovery and classification, continuous backups, immutable data, and gamified tabletop testing exercises. Backup-as-a-service (BaaS) and disaster recovery-as-a-service (DRaaS) are also becoming more popular, as enterprises look to take advantage of the scalability, cloud storage options, and ease-of-use associated with the “as-a-service” model. ... Accenture’s Whelan says that rather than try to restore the entire business in the event of a disaster, a better approach might be to create a skeletal replica of the business, an MVB, that can be spun up immediately to keep mission-critical processes going while traditional backup and recovery efforts are under way. ... The two additional elements are: one offline, immutable, or air-gapped backup that will enable organizations to get back on their feet in the event of a ransomware attack, and a goal of zero errors. Immutable data is “the gold standard,” Whelan says, but there are complexities associated with proper implementation.


Building Intelligence into the Database Layer

At the core of this evolution is the simple architectural idea of the database as an active intelligence engine. Rather than simply recording and serving historical data, an intelligent database interprets incoming signals, transforms them in real-time, and triggers meaningful actions directly from within the database layer. From a developer’s perspective, it still looks like a database, but under the hood, it’s something more: a programmable, event-driven system designed to act on high-velocity data streams with intense precision in real-time. ... Built-in processing engines unlock features like anomaly detection, forecasting, downsampling, and alerting in true real-time. These embedded engines enable real-time computation directly inside the database. Instead of moving data to external systems for analysis or automation, developers can run logic where the data already lives. ... Active intelligence doesn’t just enable faster reactions; it opens the door to proactive strategies. By continuously analyzing streaming data and comparing it to historical trends, systems can anticipate issues before they escalate. For example, gradual changes in sensor behavior can signal the early stages of a failure, giving teams time to intervene. ... Developers need more than just storage and query, they need tools that think. Embedding intelligence into the database layer represents a shift toward active infrastructure: systems that monitor, analyze, and respond at the edge, in the cloud, and across distributed environments.
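The kind of embedded anomaly detection described above — logic running where the data lives, flagging gradual drift or sudden spikes in a stream — can be illustrated with a rolling z-score check. Window size and threshold are assumptions for the example, not any product's defaults:

```python
import math

class AnomalyDetector:
    """Flag readings that deviate sharply from a rolling window of recent values."""
    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.window = window
        self.threshold = threshold
        self.values: list[float] = []

    def observe(self, x: float) -> bool:
        """Return True if x is anomalous relative to the recent window."""
        anomalous = False
        if len(self.values) >= self.window:
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.values.append(x)
        self.values = self.values[-self.window:]  # keep only the rolling window
        return anomalous

det = AnomalyDetector(window=5, threshold=3.0)
stream = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 25.0]
flags = [det.observe(v) for v in stream]
print(flags)  # only the 25.0 spike is flagged
```

An intelligent database would run this sort of check as data arrives and trigger an alert or downsampling action directly, rather than exporting the stream to an external analysis system.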


AI Cybersecurity Arms Race: Are Companies Ready?

Security operations centers were already overwhelmed before AI became mainstream. Human analysts, drowning in alerts, can’t possibly match the velocity of machine-generated threats. Detection tools, built on static signatures and rules, simply can’t keep up with attacks that mutate continuously. The vendor landscape isn’t much more reassuring. Every security company now claims its product is “AI-powered,” but too many of these features are black boxes, immature, or little more than marketing gloss. ... That doesn’t mean defenders are standing still. AI is beginning to reshape cybersecurity on the defensive side, too, and the potential is enormous. Anomaly detection, fueled by machine learning, is allowing organizations to spot unusual behavior across networks, endpoints, and cloud environments far faster than humans ever could. In security operations centers, agentic AI assistants are beginning to triage alerts, summarize incidents, and even kick off automated remediation workflows. ... The AI arms race isn’t something the CISO can handle alone; it belongs squarely in the boardroom. The challenge isn’t just technical — it’s strategic. Budgets must be allocated in ways that balance proven defenses with emerging AI tools that may not be perfect but are rapidly becoming necessary. Security teams must be retrained and upskilled to govern, tune, and trust AI systems. Policies need to evolve to address new risks such as AI model poisoning or unintended bias.


Agentic AI needs stronger digital certificates

The consensus among practitioners is that existing technologies can handle agentic AI – if, that is, organisations apply them correctly from the start. “Agentic AI fits into well-understood security best practices and paradigms, like zero trust,” Wetmore emphasises. “We have the technology available to us – the protocols and interfaces and infrastructure – to do this well, to automate provisioning of strong identities, to enforce policy, to validate least privilege access.” The key is approaching AI agents with security-by-design principles rather than bolting on protection as an afterthought. Sebastian Weir, executive partner and AI Practice Leader at IBM UK&I, sees this shift happening in his client conversations. ... Perhaps the most critical insight from security practitioners is that managing agentic AI isn’t primarily about new technology – it’s about governance and orchestration. The same platforms and protocols that enable modern DevOps and microservices can support AI agents, but only with proper oversight. “Your ability to scale is about how you create repeatable, controllable patterns in delivery,” Weir explains. “That’s where capabilities like orchestration frameworks come in – to create that common plane of provisioning agents anywhere in any platform and then governance layers to provide auditability and control.”


Learning from the Inevitable

Currently, too many organizations follow a “nuke and pave” approach to IR, opting to just reimage computers because they don’t have the people to properly extract the wisdom from an incident. In the short term, this is faster and cheaper but has a detrimental impact on protecting against future threats. When you refuse to learn from past mistakes, you are more prone to repeating them. Conversely, organizations may turn to outsourcing. Experts in managed security services and IR have realized consulting gives them a broader reach and impact over the problem — but none of these are long-term solutions. This kind of short-sighted IR creates a false sense of security. Organizations are solving the problem for the time being, but what about the future? Data breaches are going to happen, and reliance on reactive problem-solving creates a flimsy IR program that leaves an organization vulnerable to threats. ... Knowledge-sharing is the best way to go about this. Sharing key learnings from previous attacks is how these teams can grow and prevent future disasters. The problem is that while plenty of engineers agree they learn the most when something “breaks” and that incidents are a treasure trove of knowledge for security teams, these conversations are often restricted to need-to-know channels. Openness about incidents is the only way to really teach teams how to address them.

Daily Tech Digest - August 20, 2025


Quote for the day:

"Real difficulties can be overcome; it is only the imaginary ones that are unconquerable." -- Theodore N. Vail


Asian Orgs Shift Cybersecurity Requirements to Suppliers

Cybersecurity audits need to move away from a yearly or quarterly exercise to continuous evaluation, says Security Scorecard's Cobb. As part of that, organizations should look to work with their suppliers to build a relationship that can help both companies be more resilient, he says. "Maybe you do an on-site visit or maybe you do a specific evidence gathering with that supplier, especially if they're a critical supplier based on their grade," Cobb says. "That security rating is a great first step for assessment, and it also will lead into further discussions with that supplier around what things can you do better." And yes, artificial intelligence (AI) is making inroads into monitoring third-party risk profiles as well. Consultancy EY imagines a future where multiple automated agents track information about suppliers and when an event — whether cyber, geopolitical, or meteorological — affects one or more supply chains, will automatically develop plans to mitigate the risk. Pointing out the repeated supply chain shocks from the pandemic, geopolitics, and climate change, EY argues that an automated system is necessary to keep up. When a chemical spill or a cybersecurity breach affects a supplier in Southeast Asia, for example, the system would track the news, predict the impact on a company's supply, and suggest alternate sources, if needed, the EY report stated.


The successes and challenges of AI agents

To really get the benefits, businesses will need to redesign the way work is done. The agent should be placed at the center of the task, with people stepping in only when human judgment is required. There is also the issue of trust. If the agent is only giving suggestions, a person can check the results. But when the agent acts directly, the risks are higher. This is where safety rules, testing systems, and clear records become important. Right now, these systems are still being built. One unexpected problem is that agents often think they are done when they are not. Humans know when a task is finished. Agents sometimes miss that. ... Today, the real barrier goes beyond just technology. It is also how people think about agents. Some overestimate what they can do; others are hesitant to try them. The truth lies in the middle. Agents are strong with goal-based and repeatable tasks. They are not ready to replace deep human thinking yet. ... Still, the direction is clear. In the next two years, agents will become normal in customer support and software development. Writing code, checking it, and merging it will become faster. Agents will handle more of these steps with less need for back-and-forth. As this grows, companies may create new roles to manage agents, needing someone to track how they are used, make sure they follow rules, and measure how much value they bring. This role could be as common as a data officer in the future.


How To Prepare Your Platform For Agentic Commerce

APIs and MCP servers are inherently more agent-friendly but less ubiquitous than websites. They expose services in a structured, scalable way that's perfect for agent consumption. The tradeoff is that you must find a way to allow verified agents to get access to your APIs. This is where some payment processing protocols can help by allowing verified agents to get access credentials that leverage your existing authentication, rate-limiting and abuse-prevention mechanisms to ensure access doesn’t lead to spam or scraping. In many cases, the best path is a hybrid approach: Expand your existing website to allow agent-compatible access and checkout while building key capabilities for agent access via APIs or MCP servers. ... Agents work best with standardized checkouts instead of needing to dodge bot blockers and CAPTCHAs while filling out forms via screen scraping. They need an entirely programmatic checkout process. That means you must move beyond brittle browser autofill and instead accept tokenized payments directly via API. These tokens can carry pre-authorized payment methods such as tokenized credit cards, digital wallets (e.g., Apple Pay and PayPal), stablecoins or on-chain assets and account-to-account transfers. When combined with identity tokens, these payment tokens allow agents to present a complete, scoped credential that you can inspect and charge instantly. Think Stripe Checkout but for AI.
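To make the idea concrete, here is a minimal sketch of such a programmatic checkout. Every name and field below is hypothetical (not drawn from any real payment protocol or the Stripe API): it simply shows a merchant inspecting an agent's scoped credential and charging its payment token with no forms or CAPTCHAs involved.

```python
from dataclasses import dataclass

# Hypothetical scoped credential an agent presents at checkout:
# an identity plus a tokenized payment method with explicit limits.
@dataclass
class AgentCredential:
    agent_id: str
    payment_token: str      # e.g. a tokenized card or wallet reference
    max_amount_cents: int   # spend ceiling granted to this agent
    scope: str              # what the credential may be used for

def process_agent_checkout(cred: AgentCredential,
                           amount_cents: int,
                           item_scope: str) -> dict:
    """Inspect the scoped credential, then charge it or reject the request."""
    if item_scope != cred.scope:
        return {"status": "rejected", "reason": "out_of_scope"}
    if amount_cents > cred.max_amount_cents:
        return {"status": "rejected", "reason": "over_limit"}
    # A real system would now call the payment processor's API with
    # cred.payment_token; here we simply simulate a successful charge.
    return {"status": "charged", "amount_cents": amount_cents}

cred = AgentCredential("agent-42", "tok_abc123", 5000, "books")
print(process_agent_checkout(cred, 2500, "books"))  # charged
print(process_agent_checkout(cred, 9000, "books"))  # rejected: over_limit
```

The point of the scoped fields is that the merchant can verify limits before charging, which is what lets verified agents bypass the human-oriented checkout without opening the door to abuse.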


AI agents alone can’t be trusted in verification

One of the biggest risks comes from what’s known as compounding errors. Even a very accurate AI system – for example, 95% – becomes far less reliable when it’s chained to a series of compounding and related decisions. By the fifth hypothetical step, accuracy would drop to 77% or less. Unlike human teams, these systems don’t raise flags or signal uncertainty. That’s what makes them so risky: when they fail, they tend to do so silently and exponentially. ... This opacity is particularly dangerous in the fight against fraud, which is only getting more advanced. In 2025, fraudsters aren’t using fake passports and bad Photoshop. They’re using AI-generated identities, videos, and documents that are nearly impossible to distinguish from the real thing. Tools like Google’s Veo 3 or open-source image generators allow anyone to produce high-quality synthetic content at scale. ... Responsible and effective use of AI means using multiple models to cross-check results to avoid the domino effect of one error feeding into the next. It means assigning human reviewers to the most sensitive or high-risk cases – especially when fraud tactics evolve faster than models can be retrained. And it means having clear escalation procedures and full audit trails that can stand up to regulatory scrutiny. This hybrid model offers the best of both worlds: the speed and scale of AI, combined with the judgment and flexibility of human experts. As fraud becomes more sophisticated, this balance will be essential. 
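The compounding-error figure follows directly from multiplying per-step accuracies. A minimal Python illustration (the 95% accuracy and five-step chain are the article's hypothetical, not measured values):

```python
# A 95%-accurate decision chained across several dependent steps
# compounds multiplicatively: every step must be right for the
# chain as a whole to be right.
def chained_accuracy(per_step: float, steps: int) -> float:
    """Probability that every step in a chain of decisions is correct."""
    return per_step ** steps

for steps in range(1, 6):
    print(f"step {steps}: {chained_accuracy(0.95, steps):.1%}")
# By step 5 the chain is right only ~77.4% of the time,
# matching the article's "77% or less" figure.
```

This is why silent failures are so dangerous: the per-step accuracy looks reassuring, while the end-to-end reliability quietly erodes with every dependent decision.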


AI in the classroom is important for real-world skills, college professors say

The agents can flag unsupported claims in students’ writing and explain why evidence is needed and recommend the use of credible sources, Luke Behnke, vice president of product management at Grammarly, said in an interview. “Colleges recognize it’s their responsibility to prepare students for the workforce, and that now includes AI literacy,” Behnke said. Universities are also implementing AI in their own learning management systems and providing students and staff access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT. ... Cuo asks students not to simply accept whatever results advanced genAI models spit out, as they may be riddled with factual errors and hallucinations. “Students need to select and read more by themselves to create something that people don’t recognize as an AI product,” Cuo said. Some professors are trying to mitigate AI use by altering coursework and assignments, while others prefer not to use it at all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at Ohio University. But students have different requirements and use AI tools for personalized learning, collaboration, and writing, as well as for coursework workflow, Shovlin said. He stressed, however, that ethical considerations, rhetorical awareness, and transparency remain important in demonstrating appropriate use.


Automation Alert Sounds as Certificates Set to Expire Faster

Decreasing the validity time for a certificate offers multiple benefits. As previous certificate revocations have demonstrated, actually revoking every bad certificate in a timely manner, across the broad ecosystem, is a challenge. Having certificates simply expire more frequently helps address that. The CA/Browser Forum also expects an ancillary benefit of "increased consistency of quality, stability and availability of certificate lifecycle management components which enable automated issuance, replacement and rotation of certificates." While such automation won't fix every ill, the forum said that "it certainly helps." ... When it comes to achieving the so-called cryptographic agility needed to manage both of those requirements, many organizations say they're not yet there. "While awareness is high, execution is lagging," says a new study from market researcher Omdia. "Many organizations know they need to act but lack clear roadmaps or the internal alignment to do so." ... For managing the much shorter certificate renewal timeframe, only 19% of surveyed organizations say they're "very prepared," with 40% saying they're somewhat prepared and another 40% saying they're not very prepared and so far continuing to rely on manual processes. "Historically, organizations have been able to get by with poor certificate hygiene because cryptography was largely static," said Tim Callan.
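As a small illustration of the kind of automation shorter lifetimes demand, the sketch below checks how many days remain on a server's TLS certificate using only Python's standard library. The 30-day warning window is an assumption for the example, not anything mandated by the CA/Browser Forum.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse the notAfter string ssl returns, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    delta = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.days

def check_certificate(host: str, port: int = 443, warn_days: int = 30) -> None:
    """Fetch the peer certificate over TLS and warn when renewal is due soon."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = days_until_expiry(cert["notAfter"])
    if remaining < warn_days:
        print(f"{host}: certificate expires in {remaining} days - rotate now")
    else:
        print(f"{host}: {remaining} days of validity remaining")

# Example (requires network access):
# check_certificate("example.com")
```

In production this check would feed an alerting or certificate lifecycle management pipeline rather than print, with the tooling handling renewal and rotation itself — which is the automation the forum expects the shorter lifetimes to force.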


AI Data Centers Are Coming for Your Land, Water and Power

"Think of them as AI factories." But as data centers grow in size and number, often drastically changing the landscape around them, questions are looming: What are the impacts on the neighborhoods and towns where they're being built? Do they help the local economy or put a dangerous strain on the electric grid and the environment? ... As fast as the AI companies are moving, they want to be able to move even faster. Smith, in that Commerce Committee hearing, lamented that the US government needed to "streamline the federal permitting process to accelerate growth." ... Even as big tech companies invest heavily in AI, they also continue to promote their sustainability goals. Amazon, for example, aims to reach net-zero carbon emissions by 2040. Google has the same goal but states it plans to reach it 10 years earlier, by 2030. With AI's rapid advancement, experts no longer know if those climate goals are attainable, and carbon emissions are still rising. "Wanting to grow your AI at that speed and at the same time meet your climate goals are not compatible," Good says. For its Louisiana data center, Meta has "pledged to match its electricity use with 100% clean and renewable energy" and plans to "restore more water than it consumes," the Louisiana Economic Development statement reads.


Slow and Steady Security: Lessons from the Tortoise and the Hare

In security, it seems that we are constantly confronted by the next shiny object, item du jour, and/or overhyped topic. Along with this seems to come an endless supply of “experts” ready to instill fear in us around the “revolutionized threat landscape” and the “new reality” we apparently now find ourselves in and must come to terms with. Indeed, there is certainly no shortage of distractions in our field. Some of us are likely aware of and conscious of the near-constant tendency for distraction in our field. So how can we avoid falling into the trap of succumbing to the temptation and running after every distraction that comes along? Or, to pose it another way, how can we appropriately invest our time and resources in areas where we are likely to see value and return on that investment? ... All successful security teams are governed by a solid security strategy. While the strategy can be adjusted from time to time as risks and threats evolve, it shouldn’t drift wildly and certainly not in an instant. If the newest thing demands radically altering the security strategy, it’s an indicator that it may be overblown. The good news is that a well-formed security strategy can be adapted to deal with just about anything new that arises in a steady and systematic way, provided that new thing is real.


IBM and Google say scalable quantum computers could arrive this decade

Most notable advances come from qubits built with superconducting circuits, as used in IBM and Google machines. These systems must operate near absolute zero and are notoriously hard to control. Other approaches use trapped ions, neutral atoms, or photons as qubits. While these approaches offer greater inherent stability, scaling up and integrating large numbers of qubits remains a formidable practical challenge. "The costs and technical challenges of trying to scale will probably show which are more practical," said Sebastian Weidt, chief executive at Universal Quantum, a startup developing trapped ions. Weidt emphasized that government support in the coming years could play a decisive role in determining which quantum technologies prove viable, ultimately limiting the field to a handful of companies capable of bringing a system to full scale. Widespread interest in quantum computing is attracting attention from both investors and government agencies. ... These next-generation technologies are still in their early stages, though proponents argue they could eventually surpass today's quantum machines. For now, industry leaders continue refining and scaling legacy architectures developed over years of lab research.


The 6 challenges your business will face in implementing MLSecOps

ML models are often “black boxes”, even to their creators, so there’s little visibility into how they arrive at answers. For security pros, this means limited ability to audit or verify behavior – traditionally a key aspect of cybersecurity. One way to work around this opacity in AI and ML systems is Trusted Execution Environments (TEEs). These are secure enclaves in which organizations can test models repeatedly in a controlled ecosystem, creating attestation data. ... Models are not static and are shaped by the data they ingest. Thus, data poisoning is a constant threat for ML models that need to be retrained. Organizations must embed automated checks into the training process to enforce a continuously secure pipeline of data. Using information from the TEE and guidelines on how models should behave, AI and ML models can be assessed for integrity and accuracy each time they are given new information. ... Risk assessment frameworks that work for traditional software will not be applicable to the changeable nature of AI and ML programs. Traditional assessments fail to account for tradeoffs specific to ML, e.g., accuracy vs fairness, security vs explainability, or transparency vs efficiency. To navigate this difficulty, businesses must evaluate models on a case-by-case basis, looking to their mission, use case and context to weigh their risks.
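As an illustrative sketch of such an automated gate (the specific checks and thresholds are assumptions for the example, not a standard MLSecOps API), a pipeline might attest each training batch with a hash and screen it for statistical outliers before retraining proceeds:

```python
import hashlib
import statistics

def attest_batch(records: list[float]) -> str:
    """Produce a content hash so any later tampering with the batch is detectable."""
    payload = ",".join(f"{r:.6f}" for r in records).encode()
    return hashlib.sha256(payload).hexdigest()

def screen_for_outliers(records: list[float], mad_threshold: float = 5.0) -> list[float]:
    """Flag records far from the median (robust to the outliers themselves)."""
    med = statistics.median(records)
    mad = statistics.median(abs(r - med) for r in records)
    return [r for r in records if abs(r - med) > mad_threshold * mad]

clean = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02]
poisoned = clean + [50.0]

print(attest_batch(clean) == attest_batch(poisoned))  # False: batch was altered
print(screen_for_outliers(poisoned))                  # the injected 50.0 is flagged
```

Real poisoning defenses are considerably more involved (provenance tracking, per-source validation, behavioral tests against TEE attestation data), but the principle is the same: every retraining cycle passes through automated integrity checks rather than trusting the incoming data implicitly.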

Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. With concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose.  ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits, they must prove them by demonstrating how it enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often cause more frustration among employees than actually improving their security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies, controls, and being able to identify assets and their locations when necessary, of course. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernise your legacy systems is a challenge. Generative AI helps overcome the challenges faced in legacy systems and accelerates modernization. For example, it can be used to understand how legacy systems function through detailed business requirements. The resulting documents can be used to build new systems on the cloud in the second phase. This can make the process cheaper, too, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native inclusion. ... You need to have a holistic assessment of your existing system to implement generative AI effectively. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and intricacies of the data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline as the best defence is not solely a hardened perimeter, it’s also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience, they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations to convey information in a way that resonates with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - August 01, 2024

These are the skills you need to get hired in tech

While soft skills are important, communicating them to a prospective employer can present a conundrum. Tina Wang, division vice president of human resources at ADP, said there are a few ways for job seekers to bring attention to their behavioral skills. It goes beyond just listing “strong work ethic” or “problem solving” on a resume, “though it’s good to add it there too,” she said. Job seekers can incorporate behavior skills in a track record of job experiences. ... An interview with a prospective employer is also a good time to introduce behavioral skills, but time is limited and job-seekers won’t likely be able to share all their demonstrated skills and experience. “Preparation will go a long way, so think through your talking points and what is important to share,” Wang said. “Think about a few applicable, real work experiences where you demonstrated these skills and sketch out how and when to bring them during the interview process.” References can also be an excellent way to highlight behavioral skills. Intangibles such as a strong work ethic or attention to detail might be something former managers, team members or peers identify. 


Ideal authentication solution boils down to using best tools to stop attacks

Given the shifting nature of work, with more employees working remotely, the gaps in protection are manifold. Clunky authentication experiences mean users are often asked to sign in multiple times a day for different applications and accounts. “Users get extremely frustrated when this occurs, and they end up having resistance to adopting these authentication methods,” Anderson says. To improve the situation, organizations need to manage authentication scenarios in onboarding, session tokens to remember login – and the reality of username and password authentication still being used extensively throughout the security landscape, leaving vulnerabilities to fraud. “Passkeys are good for users because they simplify and streamline the actual authentication ceremony itself, where the user is actively involved,” Miller says. “It doesn’t necessarily decrease the number of times they have to authenticate but it does make it simpler and less taxing.” “They also have knock-on benefits of reducing the amount of information that leaks in the case of a database leak that can be used by an attacker. It shrinks the blast radius of account compromise.”


Should Today’s Developers Be More or Less Specialized?

“The need for specialists is not going to change. If anything, I expect it to increase,” says Hillion. “We still have a number of clients who rely on full-stack developers. I would say the general trend is towards businesses needing more specialized developers who have the right combination of technical skillsets and sector knowledge to deliver what is needed into the complex tech stack. There is significant demand for developers who specialize in particular industry sectors.” ... “Without basic knowledge, pursuing any specific development area is challenging,” says Ivanov. “That’s why it’s best to start by mastering the basic technologies someone is most proficient in, which helps them learn new things faster,” says Ivanov in an email interview. “However, core technologies should not be the end goal. It is also essential to stay up to date with technology trends and always continue using new technology.” Tasks that go beyond standard or general requirements need the involvement of specialists who have knowledge and experience in specific areas. For example, a project that requires complex algorithms or specific technologies will require a specialist with a deep understanding of them.


Between sustainability and risk: why CIOs are considering small language models

“In LLMs, the bulk of the data work is done statistically and then IT trains the model on specific topics to correct errors, giving it targeted quality data,” he says. “SLMs cost much less and require less data, but, precisely for this reason, the statistical calculation is less effective and, therefore, very high-quality data is needed, with substantial work by data scientists. Otherwise, with generic data, the model risks producing many errors.” Furthermore, SLMs are so promising and interesting for companies that even big tech offers and advertises them, like Google’s Gemma and Microsoft’s Phi-3. For this reason, according to Esposito, governance remains fundamental, within a model that should remain a closed system. “An SLM is easier to manage and becomes an important asset for the company in order to extract added value from AI,” he says. “Otherwise, with large models and open systems, you have to agree to share strategic company information with Google, Microsoft, and OpenAI. This is why I prefer to work with a system integrator that can develop customizations and provide a closed system, for internal use.”


Why geographical diversity is critical to build effective and safe AI tools

Geographical diversity is critical as organizations look to develop AI tools that can be adopted worldwide, according to Andrea Phua, senior director of the national AI group and director of the digital economy office at Singapore's Ministry of Digital Development and Information (MDDI). ... "The use of Gen AI has brought a new dimension to cyber threats. As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it," said CSA's chief executive and Commissioner of Cybersecurity David Koh. "As it is, AI already poses a formidable challenge for governments around the world [and] cybersecurity professionals would know that we are merely scratching the surface of gen AI's potential, both for legitimate applications and malicious uses," Koh said. He pointed to reports of AI-generated content, including deepfakes in video clips and memes, that have been used to sow discord and influence the outcome of national elections. At the same time, there are new opportunities for AI to be tapped to enhance cyber resilience and defense, he said. 


Cloud Migration Regrets: Should You Repatriate?

With increasing pressure to cut costs, many CTOs and CIOs are considering repatriating cloud workloads back on premises. As hard as it may seem, it’s important to think beyond just the cost. You must understand workload requirements to make sound decisions for each application. ... A lot of organizations have forgotten how much IT operations have changed since moving to the cloud. Cloud transformation meant revamping ITOps based on the chosen mix of Infrastructure-, Platform- or Software-as-a-Service (IaaS, PaaS or SaaS) services. Bringing applications back on premises strips away those service layers, and Ops teams may no longer be able or willing to accept the administrative and maintenance burden again. One final consideration before moving workloads off the cloud is security. I think security is one of the many advantages of cloud infrastructure. When businesses first started moving to the cloud, security was one of the biggest concerns. It turns out that cloud providers are better at security than you are. They can’t fix security holes in your software or other operator error scenarios, but a cloud infrastructure provides greater isolation if a breach does occur. 


Chess, AI & future of leadership

As computing power increases and its access cost reduces, AI will become the central force that drives all activities, including imagination! So, imagine the chessboard being AI-enabled. The board now has its own intelligence, with the ability to understand the context of the game and prompt the next set of moves. The difference between the board-level AI and the AI used by the player as her assistant is that the assistant knows the player’s psyche of defending or attacking, the strengths and weaknesses of the player and her opponent, and factors these in while offering suggestions. The two AIs may or may not be aligned in their suggestions since both may be accessing different references. Let’s activate the third dimension in chess – the pieces are also intelligent! They know their roles and those of the others. They too can think, strategise, and suggest. For instance, in a choice to move between the rook and the knight, the rook suggests the knight moves. The knight feels the Queen should move! This is the egalitarian version of chess! Does it feel real and practical? In the context of AI, there’s the Large Language Model, which processes data from a vast set of sources with a large number of constraints and rules.


DigiCert validation bug sets up 83,267 SSL certs for revoking

One of the validation methods approved by the Certification Authority Browser Forum (CABF), whose guidelines provide best practices for securing internet transactions in browsers and other software, involves the customer adding a DNS CNAME record that includes a random value supplied by its certificate provider. The provider, in this case DigiCert, then does a DNS lookup and verifies that the random value is as provided, confirming that the customer controls the domain. The CABF requires that, in one format of the DNS CNAME entry, the random value be prefixed with an underscore, and DigiCert discovered that, in some cases, that character was not included, rendering the validation non-compliant. By CABF rules, those certificates must be revoked within 24 hours, with no exceptions. However, DigiCert said in an update to its status page Tuesday, and in an email to customers, “Unfortunately, some customers operating critical infrastructure are not in a position to have all their certificates reissued and deployed in time without critical service interruptions. To avoid disruption to critical services, we have engaged with browser representatives alongside these customers over the last several hours. ...”
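The underscore requirement described above is a simple mechanical check, which a small hypothetical sketch can illustrate. Assumptions not from the article: the sketch already holds the CNAME label returned by DNS and the random value the CA issued, and skips the DNS lookup itself, which real validation would perform.

```python
# Hypothetical sketch of the CABF underscore check described above.
# Real validation also involves the DNS CNAME lookup, omitted here.

def label_is_compliant(label: str, random_value: str) -> bool:
    """In this CNAME format, the random value must carry a leading underscore."""
    return label == f"_{random_value}"

def audit(issued):
    """Flag certificates whose validation label lacked the underscore.

    `issued` is an iterable of (domain, label, random_value) tuples.
    Returns the list of non-compliant domains, i.e. revocation candidates.
    """
    return [domain for domain, label, rnd in issued
            if not label_is_compliant(label, rnd)]
```

For example, `audit([("good.example", "_r4nd0m", "r4nd0m"), ("bad.example", "r4nd0m", "r4nd0m")])` returns `["bad.example"]`, mirroring how the affected certificates in the incident would be identified for revocation.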


Mind the Gap: Data Quality Is Not “Fit for Purpose”

When talking about data quality, we must therefore be clear about whose purpose, what requirements, established when, and by whom. Within the context of the DMBoK definition, the answer is that every consumer evaluates the quality of a data set independently. Data is considered to be of high quality when it is fit for my purpose, satisfies my requirements, established by me when I need the data. Data quality, defined in this way, is truly in the eye of the beholder. Furthermore, data quality analyses cannot be leveraged by new consumers. For decades, we in decision support have been selling the benefits of leveraging data across applications and analyses. It has been the fundamental justification for data warehouses, data lakes, data lakehouses, etc. But misalignment between the purpose for which data was created and the purpose for which it is being used may not be immediately apparent. Especially when the data is not well understood. The consequences are faulty models and erroneous analyses. We reflexively blame the quality of the data, but that’s not where the problem lies. This is not data quality. It is data fitness. 
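The consumer-relative notion of quality can be made concrete with a small sketch. The dataset and requirements below are invented for illustration: the same records satisfy one consumer's requirements and fail another's.

```python
# Hypothetical illustration of "fitness is in the eye of the beholder":
# one dataset, two consumers, two different verdicts.

dataset = [
    {"customer_id": 1, "region": "EU", "postcode": None},
    {"customer_id": 2, "region": "EU", "postcode": "10115"},
    {"customer_id": 3, "region": "US", "postcode": "94016"},
]

def fit_for(required_fields, data):
    """One consumer's fitness check: every required field populated in every row."""
    return all(row[f] is not None for row in data for f in required_fields)

# A regional trend analysis only needs region: this data is fit for that purpose.
print(fit_for(["customer_id", "region"], dataset))    # True
# A mailing campaign needs postcodes: the very same data is unfit.
print(fit_for(["customer_id", "postcode"], dataset))  # False
```

The divergence is the point: no single "quality score" attaches to the dataset itself, only a verdict per consumer and purpose.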


Navigating Hope and Fear in a Socio-Technical Future

It is not about just spending more; that isn’t really working. You must SPEND BETTER. I and other architects literally train for decades to both cut costs and make great investment decisions. Technical debt accrual, technical health goals, and technical strategy don’t just deserve a seat at the table. They are becoming the table. A little more rationally: in all complex engineering fields, we are required to get signoff from legitimate professionals who have been measured against legitimate and hard-earned competencies. Not only does this create more stable outcomes, it actually saves and makes the economy money. Instead of ‘paying for two ok systems’, we pay for ‘one great one’. ... In all complex engineering ecosystems, it is not just outputs and companies that are regulated. The roles and skills of architects and engineers are not secret, and they really aren’t that different from company to company. I believe I am the world’s expert on architecture skills, or at least one of a dozen of them. I have interviewed and assessed hundreds of companies and thousands of architects. It is time to begin licensing. And it must be handed to a real professional society. It cannot be a vendor consortium.



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie