Showing posts with label multi-cloud. Show all posts
Showing posts with label multi-cloud. Show all posts

Daily Tech Digest - September 11, 2025


Quote for the day:

"You live longer once you realize that any time spent being unhappy is wasted." -- Ruth E. Renkl



Six hard truths for software development bosses

Everyone behaves differently when the boss is around. Everyone. And you, as a boss, need to realize this. There are two things to realize here. Firstly, when you are present, people will change who they are and what they say. Secondly, you should consider that fact when deciding whether to be in the room. ... Bosses need to realize that what they say, even comments that you might think are flippant and not meant to be taken seriously, will be taken seriously. ... The other side of that coin is that your silence and non-action can have profound effects. Maybe you space out in a meeting and miss a question. The team might think you blew them off and left the great idea hanging. Maybe you forgot to answer an email. Maybe you had bigger fish to fry and you were a bit short and dismissive of an approach by a direct report. Small lapses can be easily misconstrued by your team. ... You are the boss. You have the power to promote, demote, and award raises and bonuses. These powers are important, and people will see you in that light. Even your best attempts at being cordial, friendly, and collegial will not overcome the slight apprehension your authority will engender. Your mood on any given day will be noticed and tracked. ... You can and should have input into technical decisions and design decisions, but your team will want to be the ones driving what direction things take and how things get done. 


AI prompt injection gets real — with macros the latest hidden threat

“Broadly speaking, this threat vector — ‘malicious prompts embedded in macros’ — is yet another prompt injection method,” Roberto Enea, lead data scientist at cybersecurity services firm Fortra, told CSO. “In this specific case, the injection is done inside document macros or VBA [Visual Basic for Applications] scripts and is aimed at AI systems that analyze files.” Enea added: “Typically, the end goal is to mislead the AI system into classifying malware as safe.” ... “Attackers could embed hidden instructions in common business files like emails or Word documents, and when Copilot processed the file, it executed those instructions automatically,” Quentin Rhoads-Herrera, VP of cybersecurity services at Stratascale, explained. In response to the vulnerability, Microsoft recommended patching, restricting Copilot access, stripping hidden metadata from shared files, and enabling its built-in AI security controls. ... “We’ve already seen proof-of-concept attacks where malicious prompts are hidden inside documents, macros, or configuration files to trick AI systems into exfiltrating data or executing unintended actions,” Stratascale’s Rhoads-Herrera commented. “Researchers have also demonstrated how LLMs can be misled through hidden instructions in code comments or metadata, showing the same principle at work.” Rhoads-Herrera added: “While some of these remain research-driven, the techniques are quickly moving into the hands of attackers who are skilled at weaponizing proof-of-concepts.”


Are you really ready for AI? Exposing shadow tools in your organisation

When an organisation doesn’t regulate an approved framework of AI tools in place, its employees will commonly turn to using these applications across everyday actions. By now, everyone is aware of the existence of generative AI assets, whether they are actively using them or not, but without a proper ruleset in place, everyday employee actions can quickly become security nightmares. This can be everything from employees pasting sensitive client information or proprietary code into public generative AI tools to developers downloading promising open-source models from unverified repositories. ... The root cause of turning to shadow AI isn’t malicious intent. Unlike cyber actors, aiming to disrupt and exploit business infrastructure weaknesses for a hefty payout, employees aren’t leaking data outside of your organisation intentionally. AI is simply an accessible, powerful tool that many find exciting. In the absence of clear policies, training and oversight, and the increased pressure of faster, greater delivery, people will naturally seek the most effective support to get the job done. ... Regardless, you cannot protect against what you can’t see. Tools like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), which detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensuring these alerts connect directly to your SIEM and defining clear processes for escalation and correction are also key for maximum security.


How to error-proof your team’s emergency communications

Hierarchy paralysis occurs when critical information is withheld by junior staff due to the belief that speaking up may undermine the chain of command. Junior operators may notice an anomaly or suspect a procedure is incorrect, but often neglect to disclose their concerns until after a mistake has happened. They may assume their input will be dismissed or even met with backlash due to their position. In many cases, their default stance is to believe that senior staff are acting on insight that they themselves lack. CRM trains employees to follow a structured verbal escalation path during critical incidents. Similar to emergency operations procedures (EOPs), staff are taught to express their concerns using short, direct phrases. This approach helps newer employees focus on the issue itself rather than navigating the interaction’s social aspects — an area that can lead to cognitive overload or delayed action. In such scenarios, CRM recommends the “2-challenge rule”: team members should attempt to communicate an observed issue twice, and if the issue remains unaddressed, escalate it to upper management. ... Strengthening emergency protocols can help eliminate miscommunication between employees and departments. Owners and operators can adopt strategies from other mission-critical industries to reduce human error and improve team responsiveness. While interpersonal issues between departments and individuals in different roles are inevitable, tighter emergency procedures can ensure consistency and more predictable team behavior.


SpamGPT – AI-powered Attack Tool Used By Hackers For Massive Phishing Attack

SpamGPT’s dark-themed user interface provides a comprehensive dashboard for managing criminal campaigns. It includes modules for setting up SMTP/IMAP servers, testing email deliverability, and analyzing campaign results features typically found in Fortune 500 marketing tools but repurposed for cybercrime. The platform gives attackers real-time, agentless monitoring dashboards that provide immediate feedback on email delivery and engagement. ... Attackers no longer need strong writing skills; they can simply prompt the AI to create scam templates for them. The toolkit’s emphasis on scale is equally concerning, as it promises guaranteed inbox delivery to popular providers like Gmail, Outlook, and Microsoft 365 by abusing trusted cloud services such as Amazon AWS and SendGrid to mask its malicious traffic. ... What once required significant technical expertise can now be executed by a single operator with a ready-made toolkit. The rise of such AI-driven platforms signals a new evolution in cybercrime, where automation and intelligent content generation make attacks more scalable, convincing, and difficult to detect. To counter this emerging threat, organizations must harden their email defenses. Enforcing strong email authentication protocols such as DMARC, SPF, and DKIM is a critical first step to make domain spoofing more difficult. Furthermore, enterprises should deploy AI-powered email security solutions capable of detecting the subtle linguistic patterns and technical signatures of AI-generated phishing content.


How attackers weaponize communications networks

The most attractive targets for advanced threat actors are not endpoint devices or individual servers, but the foundational communications networks that connect everything. This includes telecommunications providers, ISPs, and the routing infrastructure that forms the internet’s backbone. These networks are a “target-rich environment” because compromising a single point of entry can grant access to a vast amount of data from a multitude of downstream targets. The primary motivation is overwhelmingly geopolitical. We’re seeing a trend of nation-state actors, such as those behind the Salt Typhoon campaign, moving beyond corporate espionage to a more strategic, long-term intelligence-gathering mission. ... Two recent trends are particularly telling and serve as major warning signs. The first is the sheer scale and persistence of these attacks. ... The second trend is the fusion of technical exploits with AI-powered social engineering. ... A key challenge is the lack of a standardized global approach. Differing regulations around data retention, privacy, and incident reporting can create a patchwork of security requirements that threat actors can easily exploit. For a global espionage campaign, a weak link in one country’s regulatory framework can compromise an entire international communications chain. The goal of international policy should be to establish a baseline of security that includes mandatory incident reporting, a unified approach to patching known vulnerabilities, and a focus on building a collective defense.


AI's free web scraping days may be over, thanks to this new licensing protocol

AI companies are capturing as much content as possible from websites while also extracting information. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard. You can think of RSL as Really Simple Syndication's (RSS) younger, tougher brother. While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore." The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms. Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too. ... It's a clever fix for a complex problem. As Tim O'Reilly, the O'Reilly Media CEO and one of the RSL initiative's high-profile backers, said: "RSS was critical to the internet's evolution…but today, as AI systems absorb and repurpose that same content without permission or compensation, the rules need to evolve. RSL is that evolution."


AI is changing the game for global trade: Nagendra Bandaru, Wipro

AI is revolutionising global supply chain and trade management by enabling businesses across industries to make real-time, intelligent decisions. This transformative shift is driven by the deployment of AI agents, which dynamically respond to changing tariff regimes, logistics constraints, and demand fluctuations. Moving beyond traditional static models, AI agents are helping create more adaptive and responsive supply chains. ... The strategic focus is also evolving. While cost optimisation remains important, AI is now being leveraged to de-risk operations, anticipate geopolitical disruptions, and ensure continuity. In essence, agentic AI is reshaping supply chains into predictive, adaptive ecosystems that align more closely with the complexities of global trade. ... The next frontier is going to be threefold: first, the rise of agentic AI at scale marks a shift from isolated use cases to enterprise-wide deployment of autonomous agents capable of managing end-to-end trade ecosystems; second, the development of sovereign and domain-specific language models is enabling lightweight, highly contextualised solutions that uphold data sovereignty while delivering robust, enterprise-grade outcomes; and third, the convergence of AI with emerging technologies—including blockchain for provenance and quantum computing for optimisation—is poised to redefine global trade dynamics.


5 challenges every multicloud strategy must address

Transferring AI data among various cloud services and providers also adds complexity — but also significant risks. “Tackling software sprawl, especially as organizations accelerate their adoption of AI, is a top action for CIOs and CTOs,” says Mindy Lieberman, CIO at database platform provider MongoDB. ... A multicloud environment can complicate the management of data sovereignty. Companies need to ensure that data remains in line with the laws and regulations of the specific geographic regions where it is stored and processed. ... Deploying even one cloud service can present cybersecurity risks for an enterprise, so having a strong security program in place is all the more vital for a multicloud environment. The risks stem from expanded attack surfaces, inconsistent security practices among service providers, increased complexity of the IT infrastructure, fragmented visibility, and other factors. IT needs to be able to manage user access to cloud services and detect threats across multiple environments — in many cases without even having a full inventory of cloud services. ... “With greater complexity comes more potential avenues of failure, but also more opportunities for customization and optimization,” Wall says. “Each cloud provider offers unique strengths and weaknesses, which means forward-thinking enterprises must know how to leverage the right services at the right time.”


What Makes Small Businesses’ Data Valuable to Cybercriminals?

Small businesses face unique challenges that make them particularly vulnerable. They often lack dedicated IT or cybersecurity teams, sophisticated systems, and enterprise-grade protections. Budget constraints mean many cannot afford enterprise-level cybersecurity solutions, creating easily exploitable gaps. Common issues include outdated software, reduced security measures, and unpatched systems, which weaken defenses and provide easy entry points for criminals. A significant vulnerability is the lack of employee cybersecurity awareness. ... Small businesses, just like large organizations, collect and store vast amounts of valuable data. Customer data represents a goldmine for cybercriminals, including first and last names, home and email addresses, phone numbers, financial information, and even medical information. Financial records are equally attractive targets, including business financial information, payment details, and credit/debit card payment data. Intellectual property and trade secrets represent valuable proprietary assets that can be sold to competitors or used for corporate espionage. ... Small businesses are undeniably attractive targets for cybercriminals, not because they are financial giants, but because they are perceived as easier to breach due to resource constraints and common vulnerabilities. Their data, from customer PII to financial records and intellectual property, is highly valuable for resale, fraud, and as gateways to larger targets.

Daily Tech Digest - September 08, 2025


Quote for the day:

"Let no feeling of discouragement prey upon you, and in the end you are sure to succeed." -- Abraham Lincoln


Coding With AI Assistants: Faster Performance, Bigger Flaws

One challenge comes in the form of how AI coding assistants tend to package their code. Rather than delivering bite-size pieces, they generally deliver larger code pull requests for porting into the main project repository. Apiiro saw AI code assistants deliver three to four times as many code commits - meaning changes to a code repository - than non-AI code assistants, but packaging fewer pull requests. The problem is that larger PRs are inherently riskier and more time-consuming to verify. "Bigger, multi-touch PRs slow review, dilute reviewer attention and raise the odds that a subtle break slips through," said Itay Nussbaum, a product manager at Apiiro. ... At the same time, the tools generated deeper problems, in the form of a 150% increase in architectural flaws and an 300% increase in privilege issues. "These are the kinds of issues scanners miss and reviewers struggle to spot - broken auth flows, insecure designs, systemic weaknesses," Nussbaum said. "In other words, AI is fixing the typos but creating the time bombs." The tools also have a greater tendency to leak cloud credentials. "Our analysis found that AI-assisted developers exposed Azure service principals and storage access keys nearly twice as often as their non-AI peers," Nussbaum said. "Unlike a bug that can be caught in testing, a leaked key is live access: an immediate path into the production cloud infrastructure."


IT Leadership Is More Change Management Than Technical Management

Planning is considered critical in business to keep an organization moving forward in a predictable way, but Mahon doesn’t believe in the traditional annual and long-term planning in which lots of time is invested in creating the perfect plan which is then executed. “Never get too engaged in planning. You have a plan, but it’s pretty broad and open-ended. The North Star is very fuzzy, and it never gets to be a pinpoint [because] you need to focus on all the stuff that's going on around you,” says Mahon. “You should know exactly what you're going to do in the next two to three months. From three to six months out, you have a really good idea what you're going to do but be prepared to change. And from six to nine months or a year, [I wait until] we get three months away before I focus on it because tech and business needs change rapidly.” ... “The good ideas are mostly common knowledge. To be honest, I don’t think there are any good self-help books. Instead, I have a leadership coach who is also my mental health coach,” says Mahon. “Books try to get you to change who you are, and it doesn’t work. Be yourself. I have a leadership coach who points out my flaws, 90% of which I’m already aware of. His philosophy is don’t try to fix the flaw, address the flaw so, for example, I’m mindful about my tendency to speak too directly.”


The Anatomy of SCREAM: A Perfect Storm in EA Cupboard

SCREAM- Situational Chaotic Realities of Enterprise Architecture Management- captures the current state of EA practice, where most organizations, from medium to large complexity, struggle to derive optimal value from investments in enterprise architecture capabilities. It’s the persistent legacy challenges across technology stacks and ecosystems that need to be solved to meet strategic business goals and those moments when sudden, ill-defined executive needs are met with a hasty, reactive sprint, leading to a fractured and ultimately paralyzing effect on the entire organization. ... The paradox is that the very technologies offering solutions to business challenges are also key sources of architectural chaos, further entrenching reactive SCREAM. As noted, the inevitable chaos and fragmentation that emerge from continuous technology additions lead to silos and escalating compatibility issues. ... The chaos of SCREAM is not just an external force; it’s a product of our own making. While we preach alignment to the business, we often get caught up in our own storm in an EA cupboard. How often do we play EA on EA? ... While pockets of recognizable EA wins may exist through effective engagement, a true, repeatable value-add requires a seat at the strategic table. This means “architecture-first” must evolve beyond being a mere buzzword or a token effort, becoming a reliable approach that promotes collaborative success rather than individual credit-grabbing.


How Does Network Security Handle AI?

Detecting when AI models begin to vary and yield unusual results is the province of AI specialists, users and possibly the IT applications staff. But the network group still has a role in uncovering unexpected behavior. That role includes: Properly securing all AI models and data repositories on the network. Continuously monitoring all access points to the data and the AI system. Regularly scanning for network viruses and any other cyber invaders that might be lurking. ... both application and network teams need to ensure strict QA principles across the entire project -- much like network vulnerability testing. Develop as many adversarial prompt tests coming from as many different directions and perspectives as you can. Then try to break the AI system in the same way a perpetrator would. Patch up any holes you find in the process. ... Apply least privilege access to any AI resource on the network and continually monitor network traffic. This philosophy should also apply to those on the AI application side. Constrict the AI model being used to the specific use cases for which it was intended. In this way, the AI resource rejects any prompts not directly related to its purpose. ... Red teaming is ethical hacking. In other words, deploy a team whose goal is to probe and exploit the network in any way it can. The aim is to uncover any network or AI vulnerability before a bad actor does the same.


Lack of board access: The No. 1 factor for CISO dissatisfaction

CISOs who don’t get access to the board are often buried within their organizations. “There are a lot of companies that will hire at a director level or even a senior manager level and call it a CISO. But they don’t have the authority and scope to actually be able to execute what a CISO does,” says Nick Kathmann, CISO at LogicGate. Instead of reporting directly to the board or CEO, these CISOs will report to a CIO, CTO or other executive, despite the problems that can arise in this type of reporting structure. CIOs and CTOs are often tasked with implementing new technology. The CISO’s job is to identity risks and ensure the organization is secure. “If the CIO doesn’t like those risks or doesn’t want to do anything to fix those risks, they’ll essentially suppress them [CISOs] as much as they can,” says Kathmann. ... Getting in front of the board is one thing. Effectively communicating cybersecurity needs and getting them met is another. It starts with forming relationships with C-suite peers. Whether CISOs are still reporting up to another executive or not, they need to understand their peers’ priorities and how cybersecurity can mesh with those. “The CISO job is an executive job. As an executive, you rely completely on your peer relationships. You can’t do anything as an executive in a vacuum,” says Barrack. Working in collaboration, rather than contention, with other executives can prepare CISOs to make the most of their time in front of the board.


From Vault Sprawl to Governance: How Modern DevOps Teams Can Solve the Multi-Cloud Secrets Management Nightmare

Every time an application is updated or a new service is deployed, one or multiple new identities are born. These NHIs include service accounts, CI/CD pipelines, containers, and other machine workloads, the running pieces of software that connect to other resources and systems to do work. Enterprises now commonly see 100 or more NHIs for every single human identity. And that number keeps growing. ... Fixing this problem is possible, but it requires an intentional strategy. The first step is creating a centralized inventory of all secrets. This includes secrets stored in vaults, embedded in code, or left exposed in CI/CD pipelines and environments. Orphaned and outdated secrets should be identified and removed. Next, organizations must shift left. Developers and DevOps teams require tools to detect secrets early, before they are committed to source control or merged into production. Educating teams and embedding detection into the development process significantly reduces accidental leaks. Governance must also include lifecycle mapping. Secrets should be enriched with metadata such as owner, creation date, usage frequency, and last rotation. Automated expiration and renewal policies help enforce consistency and reduce long-term risk. Contributions should be both product- and vendor-agnostic, focusing on market insights and thought leadership.


Digital Public Infrastructure: The backbone of rural financial inclusion

When combined, these infrastructures — UPI for payments, ONDC for commerce, AAs for credit, CSCs for handholding support and broadband for connectivity form a powerful ecosystem. Together, these enable a farmer to sell beyond the village, receive instant payment and leverage that income proof for a micro-loan, all within a seamless digital journey. Adding to this, e-KYC ensures that identity verification is quick, low-cost and paperless, while AePS provides last-mile access to cash and banking services, ensuring inclusion even for those outside the smartphone ecosystem. This integration reduces dependence on middlemen, enhances transparency and fosters entrepreneurship. ...  Of course, progress does not mean perfection. There are challenges that must be addressed with urgency and sensitivity. Many rural merchants hesitate to fully embrace digital commerce due to uncertainties around Goods and Services Tax (GST) compliance. Digital literacy, though improving, still varies widely, particularly among older populations and women. Infrastructure costs such as last-mile broadband and device affordability remain burdensome for small operators. These are not reasons to slow down but opportunities to fine-tune policy. Simplifying tax processes for micro-enterprises, investing in vernacular digital literacy programmes, subsidising rural connectivity and embedding financial education into community touchpoints such as CSCs will be essential to ensure no one is left behind.


Cybersecurity research is getting new ethics rules, here’s what you need to know

Ethics analysis should not be treated as a one-time checklist. Stakeholder concerns can shift as a project develops, and researchers may need to revisit their analysis as they move from design to execution to publication. ...“Stakeholder ethical concerns impact academia, industry, and government,” Kalu said. “Security teams should replace reflexive defensiveness with structured collaboration: recognize good-faith research, provide intake channels and SLAs, support coordinated disclosure and pre-publication briefings, and engage on mitigation timelines. A balanced, invitational posture, rather than an adversarial one, will reduce harm, speed remediation, and encourage researchers to keep working on that project.” ... While the new requirements target academic publishing, the ideas extend to industry practice. Security teams often face similar dilemmas when deciding whether to disclose vulnerabilities, release tools, or adopt new defensive methods. Thinking in terms of stakeholders provides a way to weigh the benefits and risks of those decisions. ... Peng said ethical standards should be understood as “scaffolds that empower thoughtful research,” providing clarity and consistency without blocking exploration of adversarial scenarios. “By building ethics into the process from the start and revisiting it as research develops, we can both protect stakeholders and ensure researchers can study the potential threats that adversaries, who face no such constraints, may exploit,” she said.


From KYC to KYAI: Why ‘Algorithmic Transparency’ is Now Critical in Banking

This growing push for transparency into AI models has introduced a new acronym to the risk and compliance vernacular: KYAI, or "know your AI." Just like finance institutions must know the important details about their customers, so too must they understand the essential components of their AI models. The imperative has evolved beyond simply knowing "who" to "how." Based on my work helping large banks and other financial institutions integrate AI into their KYC workflows over the last few years, I’ve seen what can happen when these teams spend the time vetting their AI models and applying rigorous transparency standards. And, I’ve seen what can happen when they become overly trusting of black-box algorithms that deliver decisions based on opaque methods with no ability to attribute accountability. The latter rarely ever ends up being the cheapest or fastest way to produce meaningful results. ... The evolution from KYC to KYAI is not merely driven by regulatory pressure; it reflects a fundamental shift in how businesses operate today. Financial institutions that invest in AI transparency will be equipped to build greater trust, reduce operational risks, and maintain auditability without missing a step in innovation. The transformation from black box AI to transparent, governable systems represents one of the most significant operational challenges facing financial institutions today.


Why compliance clouds are essential

From a technical perspective, compliance clouds offer something that traditional clouds can’t match, these are the battle-tested security architectures. By implementing them, the organizations can reduce their data breach risk by 30-40% compared to standard cloud deployments. This is because compliance clouds are constantly reviewed and monitored by third-party experts, ensuring that we are not just getting compliance, but getting an enterprise-grade security that’s been validated by some of the most security-conscious organizations in the world. ... What’s particularly interesting is that 58% of this market is software focused. As organizations prioritize automation and efficiency in managing complex regulatory requirements, this number is set to grow further. Over 75% of federal agencies have already shifted to cloud-based software to meet evolving compliance needs. Following this, we at our organizations have also achieved FedRAMP® High Ready compliance for Cloud. ... Cloud compliance solutions deliver far-reaching benefits that extend well beyond regulatory adherence, offering a powerful mix of cost efficiency, trust building, adaptability, and innovation enablement. ... In an era where trust is a competitive currency, compliance cloud certifications serve as strong differentiators, signaling an organization’s unwavering commitment to data protection and regulatory excellence.

Daily Tech Digest - August 02, 2025


Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham


Chief AI role gains traction as firms seek to turn pilots into profits

CAIOs understand the strategic importance of their role, with 72% saying their organizations risk falling behind without AI impact measurement. Nevertheless, 68% said they initiate AI projects even if they can’t assess their impact, acknowledging that the most promising AI opportunities are often the most difficult to measure. Also, some of the most difficult AI-related tasks an organization must tackle rated low on CAIOs’ priority lists, including measuring the success of AI investments, obtaining funding and ensuring compliance with AI ethics and governance. The study’s authors didn’t suggest a reason for this disconnect. ... Though CEO sponsorship is critical, the authors also stressed the importance of close collaboration across the C-suite. Chief operating officers need to redesign workflows to integrate AI into operations while managing risk and ensuring quality. Tech leaders need to ensure that the technical stack is AI-ready, build modern data architectures and co-create governance frameworks. Chief human resource officers need to integrate AI into HR processes, foster AI literacy, redesign roles and foster an innovation culture. The study found that the factors that separate high-performing CAIOs from their peers are measurement, teamwork and authority. Successful projects address high-impact areas like revenue growth, profit, customer satisfaction and employee productivity.


Mind the overconfidence gap: CISOs and staff don’t see eye to eye on security posture

“Executives typically rely on high-level reports and dashboards, whereas frontline practitioners see the day-to-day challenges, such as limitations in coverage, legacy systems, and alert fatigue — issues that rarely make it into boardroom discussions,” she says. “This disconnect can lead to a false sense of security at the top, causing underinvestment in areas such as secure development, threat modeling, or technical skills.” ... Moreover, the CISO’s rise in prominence and repositioning for business leadership may also be adding to the disconnect, according to Adam Seamons, information security manager at GRC International Group. “Many CISOs have shifted from being technical leads to business leaders. The problem is that in doing so, they can become distanced from the operational detail,” Seamons says. “This creates a kind of ‘translation gap’ between what executives think is happening and what’s actually going on at the coalface.” ... Without a consistent, shared view of risk and posture, strategy becomes fragmented, leading to a slowdown in decision-making or over- or under-investment in specific areas, which in turn create blind spots that adversaries can exploit. “Bridging this gap starts with improving the way security data is communicated and contextualized,” Forescout’s Ferguson advises. 


7 tips for a more effective multicloud strategy

For enterprises using dozens of cloud services from multiple providers, the level of complexity can quickly get out of hand, leading to chaos, runaway costs, and other issues. Managing this complexity needs to be a key part of any multicloud strategy. “Managing multiple clouds is inherently complex, so unified management and governance are crucial,” says Randy Armknecht, a managing director and global cloud practice leader at business advisory firm Protiviti. “Standardizing processes and tools across providers prevents chaos and maintains consistency,” Armknecht says. Cloud-native application protection platforms (CNAPP) — comprehensive security solutions that protect cloud-native applications from development to runtime — “provide foundational control enforcement and observability across providers,” he says. ... Protecting data in multicloud environments involves managing disparate APIs, configurations, and compliance requirements across vendors, Gibbons says. “Unlike single-cloud environments, multicloud increases the attack surface and requires abstraction layers [to] harmonize controls and visibility across platforms,” he says. Security needs to be uniform across all cloud services in use, Armknecht adds. “Centralizing identity and access management and enforcing strong data protection policies are essential to close gaps that attackers or compliance auditors could exploit,” he says.


Building Reproducible ML Systems with Apache Iceberg and SparkSQL: Open Source Foundations

Data lakes were designed for a world where analytics required running batch reports and maybe some ETL jobs. The emphasis was on storage scalability, not transactional integrity. That worked fine when your biggest concern was generating quarterly reports. But ML is different. ... Poor data foundations create costs that don't show up in any budget line item. Your data scientists spend most of their time wrestling with data instead of improving models. I've seen studies suggesting sixty to eighty percent of their time goes to data wrangling. That's... not optimal. When something goes wrong in production – and it will – debugging becomes an archaeology expedition. Which data version was the model trained on? What changed between then and now? Was there a schema modification that nobody documented? These questions can take weeks to answer, assuming you can answer them at all. ... Iceberg's hidden partitioning is particularly nice because it maintains partition structures automatically without requiring explicit partition columns in your queries. Write simpler SQL, get the same performance benefits. But don't go crazy with partitioning. I've seen teams create thousands of tiny partitions thinking it will improve performance, only to discover that metadata overhead kills query planning. Keep partitions reasonably sized (think hundreds of megabytes to gigabytes) and monitor your partition statistics.


The Creativity Paradox of Generative AI

Before talking about AI creation ability, we need to understand a simple linguistic limitation: despite the data used for these compositions having human meanings initially, i.e., being seen as information, after being de- and recomposed in a new, unknown way, these compositions do not have human interpretation, at least for a while, i.e., they do not form information. Moreover, these combinations cannot define new needs but rather offer previously unknown propositions to the specified tasks. ... Propagandists of know-it-all AI have a theoretical basis defined in the ethical principles that such an AI should realise and promote. Regardless of how progressive they sound, their core is about neo-Marxist concepts of plurality and solidarity. Plurality states that the majority of people – all versus you – is always right (while in human history it is usually wrong), i.e., if an AI tells you that your need is already resolved in the way that the AI articulated, you have to agree with it. Solidarity is, in essence, a prohibition of individual opinions and disagreements, even just slight ones, with the opinion of others; i.e., everyone must demonstrate solidarity with all. ... The know-it-all AI continuously challenges a necessity in the people’s creativity. The Big AI Brothers think for them, decide for them, and resolve all needs; the only thing that is required in return is to obey the Big AI Brother directives.


Doing More With Your Existing Kafka

The transformation into a real-time business isn’t just a technical shift, it’s a strategic one. According to MIT’s Center for Information Systems Research (CISR), companies in the top quartile of real-time business maturity report 62% higher revenue growth and 97% higher profit margins than those in the bottom quartile. These organizations use real-time data not only to power systems but to inform decisions, personalize customer experiences and streamline operations. ... When event streams are discoverable, secure and easy to consume, they are more likely to become strategic assets. For example, a Kafka topic tracking payment events could be exposed as a self-service API for internal analytics teams, customer-facing dashboards or third-party partners. This unlocks faster time to value for new applications, enables better reuse of existing data infrastructure, boosts developer productivity and helps organizations meet compliance requirements more easily. ... Event gateways offer a practical and powerful way to close the gap between infrastructure and innovation. They make it possible for developers and business teams alike to build on top of real-time data, securely, efficiently and at scale. As more organizations move toward AI-driven and event-based architectures, turning Kafka into an accessible and governable part of your API strategy may be one of the highest-leverage steps you can take, not just for IT, but for the entire business.


Meta-Learning: The Key to Models That Can "Learn to Learn"

Meta-learning is a field within machine learning that focuses on algorithms capable of learning how to learn. In traditional machine learning, an algorithm is trained on a specific dataset and becomes specialized for that task. In contrast, meta-learning models are designed to generalize across tasks, learning the underlying principles that allow them to quickly adapt to new, unseen tasks with minimal data. The idea is to make machine learning systems more like humans — able to leverage prior knowledge when facing new challenges. ... This is where meta-learning shines. By training models to adapt to new situations with few examples, we move closer to creating systems that can handle the diverse, dynamic environments found in the real world. ... Meta-learning represents the next frontier in machine learning, enabling models that are adaptable and capable of generalizing across a wide range of tasks with minimal data. By making machines more capable of learning from fewer examples, meta-learning has the potential to revolutionize fields like healthcare, robotics, finance, and more. While there are still challenges to overcome, the ongoing advancements in meta-learning techniques, such as few-shot learning, transfer learning, and neural architecture search, are making it an exciting area of research with vast potential for practical applications.


US govt, Big Tech unite to build one stop national health data platform

Under this framework, applications must support identity-proofing standards, consent management protocols, and Fast Healthcare Interoperability Resources (FHIR)-based APIs that allow for real-time retrieval of medical data across participating systems. The goal, according to CMS Administrator Chiquita Brooks-LaSure, is to create a “unified digital front door” to a patient’s health records that are accessible from any location, through any participating app, at any time. This unprecedented public-private initiative builds on rules first established under the 2016 21st Century Cures Act and expanded by the CMS Interoperability and Patient Access Final Rule. This rule mandates that CMS-regulated payers such as Medicare Advantage organizations, Medicaid programs, and Affordable Care Act (ACA)-qualified health plans make their claims, encounter data, lab results, provider remittances, and explanations of benefits accessible through patient-authorized APIs. ... ID.me, another key identity verification provider participating in the CMS initiative, has also positioned itself as foundational to the interoperability framework. The company touts its IAL2/AAL2-compliant digital identity wallet as a gateway to streamlined healthcare access. Through one-time verification, users can access a range of services across providers and government agencies without repeatedly proving their identity.


What Is Data Literacy and Why Does It Matter?

Building data literacy in an organization is a long-term project, often spearheaded by the chief data officer (CDO) or another executive who has a vision for instilling a culture of data in their company. In a report from the MIT Sloan School of Management, experts noted that to establish data literacy in a company, it’s important to first establish a common language so everyone understands and agrees on the definition of commonly used terms. Second, management should build a culture of learning and offer a variety of modes of training to suit different learning styles, such as workshops and self-led courses. Finally, the report noted that it’s critical to reward curiosity – if employees feel they’ll get punished if their data analysis reveals a weakness in the company’s business strategy, they’ll be more likely to hide data or just ignore it. Donna Burbank, an industry thought leader and the managing director of Global Data Strategy, discussed different ways to build data literacy at DATAVERSITY’s Data Architecture Online conference in 2021. ... Focusing on data literacy will help organizations empower their employees, giving them the knowledge and skills necessary to feel confident that they can use data to drive business decisions. As MIT senior lecturer Miro Kazakoff said in 2021: “In a world of more data, the companies with more data-literate people are the ones that are going to win.”


LLMs' AI-Generated Code Remains Wildly Insecure

In the past two years, developers' use of LLMs for code generation has exploded, with two surveys finding that nearly three-quarters of developers have used AI code generation for open source projects, and 97% of developers in Brazil, Germany, and India are using LLMs as well. And when non-developers use LLMs to generate code without having expertise — so-called "vibe coding" — the danger of security vulnerabilities surviving into production code dramatically increases. Companies need to figure out how to secure their code because AI-assisted development will only become more popular, says Casey Ellis, founder at Bugcrowd, a provider of crowdsourced security services. ... Veracode created an analysis pipeline for the most popular LLMs (declining to specify in the report which ones they tested), evaluating each version to gain data on how their ability to create code has evolved over time. More than 80 coding tasks were given to each AI chatbot, and the subsequent code was analyzed. While the earliest LLMs tested — versions released in the first half of 2023 — produced code that did not compile, 95% of the updated versions released in the past year produced code that passed syntax checking. On the other hand, the security of the code has not improved much at all, with about half of the code generated by LLMs having a detectable OWASP Top-10 security vulnerability, according to Veracode.

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then where we see any inefficiencies; be it in workflows or process, automation works for the other 20% of the friction. The remaining 10–20% is where the engineering plays its important role, and it allows to touch on the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or very poor quality which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018. 


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand new PQC algorithms, whilst allowing legacy devices that do not support them, to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasise the importance of understanding the cover time of data; nor reference the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts, is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may chose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires. 


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For others like PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle. 

Daily Tech Digest - June 20, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair



Encryption Backdoors: The Security Practitioners’ View

On the one hand, “What if such access could deliver the means to stop crime, aid public safety and stop child exploitation?” But on the other hand, “The idea of someone being able to look into all private conversations, all the data connected to an individual, feels exposing and vulnerable in unimaginable ways.” As a security practitioner he has both moral and practical concerns. “Even if lawful access isn’t the same as mass surveillance, it would be difficult to distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally, it is a reversal of the presumption of innocence and means no-one can have any guaranteed privacy. Professionally he says, “Once the encryption can be broken, once there is a backdoor allowing someone to access data, trust in that vendor will lessen due to the threat to security and privacy introducing another attack vector into the equation.” It is this latter point that is the focus for most security practitioners. “From a practitioner’s standpoint,” says Rob T Lee, chief of research at SANS Institute and founder at Harbingers, “we’ve seen time and again that once a vulnerability exists, it doesn’t stay in the hands of the ‘good guys’ for long. It becomes a target. And once it’s exploited, the damage isn’t theoretical. It affects real people, real businesses, and critical infrastructure.”


Visa CISO Subra Kumaraswamy on Never Allowing Cyber Complacency

Kumaraswamy is always thinking about talent and technology in cybersecurity. Talent is a perennial concern in the industry, and Visa is looking to grow its own. The Visa Payments Learning Program, launched in 2023, aims to help close the skills gap in cyber through training and certification. “We are offering this to all of the employees. We’re offering it to our partners, like the banks, our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115 different technologies in cyber, and Kumaraswamy is constantly evaluating where to go next. “How do I [get to] the 116th, 117th, 181th?” he asks. ”That needs to be added because every layer counts.” Of course, GenAI is a part of that equation. Thus far, Kumaraswamy and his team are exploring more than 80 different GenAI initiatives within cyber. “We’ve already taken about three to four of those initiatives … to the entire company. That includes the what we call a ‘shift left’ process within Visa. It is now enabled with agentic AI. It’s reducing the time to find bugs in the code. It is also helping reduce the time to investigate incidents,” he shares. Visa is also taking its best practices in cybersecurity and sharing them with their customers. “We can think of this as value-added services to the mid-size banks, the credit unions, who don’t have the scale of Visa,” says Kumaraswamy.


Agentic AI in automotive retail: Creating always-on sales teams

To function effectively, digital agents need memory. This is where memory modules come into play. These components store key facts about ongoing interactions, such as the customer’s vehicle preferences, budget, and previous questions. For instance, if a returning visitor had previously shown interest in SUVs under a specific price range, the memory module allows the AI to recall that detail. Instead of restarting the conversation, the agent can pick up where it left off, offering an experience that feels personalised and informed. Memory modules are critical for maintaining consistency across long or repeated interactions. Without them, agentic AI would struggle to replicate the attentive service provided by a human salesperson who remembers returning customers. ... Despite the intelligence of agentic AI, there are scenarios where human involvement is still needed. Whether due to complex financing questions or emotional decision-making, some buyers prefer speaking to a person before finalizing their decision. A well-designed agentic system should recognize when it has reached the limits of its capabilities. In such moments, it should facilitate a handover to a human representative. This includes summarizing the conversation so far, alerting the sales team in real-time, and scheduling a follow-up if required.


Multicloud explained: Why it pays to diversify your cloud strategy

If your cloud provider were to suffer a massive and prolonged outage, that would have major repercussions on your business. While that’s pretty unlikely if you go with one of the hyperscalers, it’s possible with a more specialized vendor. And even with the big players, you may discover annoyances, performance problems, unanticipated charges, or other issues that might cause you to rethink your relationship. Using services from multiple vendors makes it easier to end a relationship that feels like it’s gone stale without you having to retool your entire infrastructure. It can be a great means to determine which cloud providers are best for which workloads. And it can’t hurt as a negotiating tactic when contracts expire or when you’re considering adding new cloud services. ... If you add more cloud resources by adding services from a different vendor, you’ll need to put in extra effort to get the two clouds to play nicely together, a process that can range from “annoying” to “impossible.” Even after bridging the divide, there’s administrative overhead involved—it’ll be harder to keep tabs on data protection and privacy, for instance, and you’ll need to track cloud usage and the associated costs for multiple vendors. Network bandwidth. Many vendors make it cheap and easy to move data to and within their cloud, but might make you pay a premium to export it. 


Decentralized Architecture Needs More Than Autonomy

Decentralized architecture isn’t just a matter of system design - it’s a question of how decisions get made, by whom, and under what conditions. In theory, decentralization empowers teams. In practice, it often exposes a hidden weakness: decision-making doesn’t scale easily. We started to feel the cracks as our teams expanded quickly and our organizational landscape became more complex. As teams multiplied, architectural alignment started to suffer - not because people didn’t care, but because they didn’t know how or when to engage in architectural decision-making. ... The shift from control to trust requires more than mindset - it needs practice. We leaned into a lightweight but powerful toolset to make decentralized decision-making work in real teams. Chief among them is the Architectural Decision Record (ADR). ADRs are often misunderstood as documentation artifacts. But in practice, they are confidence-building tools. They bring visibility to architectural thinking, reinforce accountability, and help teams make informed, trusted decisions - without relying on central authority. ... Decentralized architecture works best when decisions don’t happen in isolation. Even with good individual practices - like ADRs and advice-seeking - teams still need shared spaces to build trust and context across the organization. That’s where Architecture Advice Forums come in.


4 new studies about agentic AI from the MIT Initiative on the Digital Economy

In their study, Aral and Ju found that human-AI pairs excelled at some tasks and underperformed human-human pairs on others. Humans paired with AI were better at creating text but worse at creating images, though campaigns from both groups performed equally well when deployed in real ads on social media site X. Looking beyond performance, the researchers found that the actual process of how people worked changed when they were paired with AI . Communication (as measured by messages sent between partners) increased for human-AI pairs, with less time spent on editing text and more time spent on generating text and visuals. Human-AI pairs sent far fewer social messages, such as those typically intended to build rapport. “The human-AI teams focused more on the task at hand and, understandably, spent less time socializing, talking about emotions, and so on,” Ju said. “You don’t have to do that with agents, which leads directly to performance and productivity improvements.” As a final part of the study, the researchers varied the assigned personality of the AI agents using the Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The AI personality pairing experiments revealed that programming AI personalities to complement human personalities greatly enhanced collaboration. 


DevOps Backup: Top Reasons for DevOps and Management

Depending on the industry, you may need to comply with different security protocols, acts, certifications, and standards. If your company operates in a highly regulated industry, like healthcare, technology, financial services, pharmaceuticals, manufacturing, or energy, those security and compliance regulations and protocols can be even more strict. Thus, to meet the compliance stringent security requirements, your organization needs to implement security measures, like role-based access controls, encryption, ransomware protection measures — just to name RTOs and RPOs, risk-assessment plans, and other compliance best practices… And, of course, a backup and disaster recovery plan is one of them, too. It ensures that the company will be able to restore its critical data fast, guaranteeing the data availability, accessibility, security, and confidentiality of your data. ... Another issue that is closely related to compliance is data retention. Some compliance regulations require organizations to keep their data for a long time. As an example, we can mention NIST’s requirements from its Security and Privacy Controls for Information Systems and Organizations: “… Storing audit records on separate systems or components applies to initial generation as well as backup or long-term storage of audit records…”


How AI can save us from our 'infinite' workdays, according to Microsoft

Activity is not the same as progress. What good is work if it's just busy work and not tackling the right tasks or goals? Here, Microsoft advises adopting the Pareto Principle, which postulates that 20% of the work should deliver 80% of the outcomes. And how does this involve AI? Use AI agents to handle low-value tasks, such as status meetings, routine reports, and administrative churn. That frees up employees to focus on deeper tasks that require the human touch. For this, Microsoft suggested watching the leadership keynote from the Microsoft 365 Community Conference on Building the Future Firm. ... Instead of using an org chart to delineate roles and responsibilities, turn to a work chart. A work chart is driven more by outcome, in which teams are organized around a specific goal. Here, you can use AI to fill in some of the gaps, again freeing up employees for more in-depth work. ... Finally, Microsoft pointed to a new breed of professionals known as agent bosses. They handle the infinite workday not by putting in more hours but by working smarter. One example cited in the report is Alex Farach, a researcher at Microsoft. Instead of getting swamped in manual work, Farach uses a trio of AI agents to act as his assistants. One collects daily research. The second runs statistical analysis. And the third drafts briefs to tie all the data together.
 

Data Governance and AI Governance: Where Do They Intersect?

AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. For instance, both governance frameworks need to ensure quality information meets business needs. If a major retailer discovered their AI-powered product recommendation engine was suggesting irrelevant items to customers, then DG and AIG would want the issue resolved. However, either approach or a combination could be best to solving the problem. Determining the right governance response requires analyzing the root issue. ... DG and AIG provide different approaches; which works best depends on the problem. Take the example, above, of the inaccurate pricing information to a customer in response to a query. The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while AIG improved the AI model’s mechanics by taking a collaborative approach with both data governance and AI governance perspectives. 


Deepfake Rebellion: When Employees Become Targets

Surviving and mitigating such an attack requires moving beyond purely technological solutions. While AI detection tools can help, the first and most critical line of defense lies in empowering the human factor. A resilient organization builds its bulwarks on human risk management and security awareness training, specifically tailored to counter the mental manipulation inherent in deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security personnel, but respected peers from diverse departments trained to coach workshops. ... Leadership must address employees first, acknowledge the incident, express understanding of the distress caused, and unequivocally state the deepfake is under investigation. Silence breeds speculation and distrust. There should be channels for employees to voice concerns, ask questions, and access support without fear of retribution. This helps to mitigate panic and rebuild a sense of community. Ensure a unified public response, coordinating Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust, built through consistent leadership, transparent communication, and demonstrable commitment to shared values. The goal is to create an environment where verification habits are second nature. It’s about discerning malicious fabrication from human error or disagreement.