Daily Tech Digest - August 25, 2025


Quote for the day:

"The pain you feel today will be the strength you feel tomorrow." -- Anonymous


Proactive threat intelligence boosts security & resilience

Threat intelligence is categorised into four key areas, each serving a unique purpose within an organisation. Strategic intelligence provides executives with a high-level overview, covering broad trends and potential impacts on the business, including financial or reputational ramifications. This level of intelligence guides investment and policy decisions. Tactical intelligence is aimed at IT managers and security architects. It details the tactics, techniques, and procedures (TTPs) of threat actors, assisting in strengthening defences and optimising security tools. Operational intelligence is important for security operations centre analysts, offering insights into imminent or ongoing threats by focusing on indicators of compromise (IoCs), such as suspicious IP addresses or file hashes. Finally, technical intelligence concerns the most detailed level of threat data, offering timely information on IoCs. While valuable, its relevance can be short-lived as attackers frequently change tactics and infrastructure. ... Despite these benefits, many organisations face significant hurdles. Building an in-house threat intelligence capability is described as requiring a considerable investment in specialised personnel, tools, and continual data analysis. For small and mid-sized organisations, this can be a prohibitive challenge, despite the increasing frequency of targeted attacks by sophisticated adversaries.


Data Is a Dish Best Served Fresh: “In the Wild” Versus Active Exploitation

Combating internet-wide opportunistic exploitation is a complex problem, with new vulnerabilities being weaponized at an alarming rate. In addition to the staggering increase in volume, attackers are getting better at exploiting zero-day vulnerabilities via APTs and criminals or botnets at much higher frequency, on a massive scale. The amount of time between disclosure of a new vulnerability and the start of active exploitation has been drastically reduced, leaving defenders with little time to react and respond. On the internet, the difference between one person observing something and everyone else seeing it is often quantified in just minutes. ... Generally speaking, a lot of work goes into weaponizing a software vulnerability. It’s deeply challenging and requires advanced technical skill. We tend to sometimes forget that attackers are deeply motivated by profit, just like businesses are. If attackers think something is a dead end, they won’t want to invest their time. So, investigating what attackers are up to via proxy is a good way to understand how much you need to care about a specific vulnerability. ... These targeted attacks threaten to circumvent existing defense capabilities and expose organizations to a new wave of disruptive breaches. In order to adequately protect their networks, defenders must evolve in response. Ultimately, there is no such thing and a set-and-forget single source of truth for cybersecurity data.


Quietly Fearless Leadership for 4 Golden Signals

Most leadership mistakes start with a good intention and a calendar invite. We’ve learned to lead by subtraction. It’s disarmingly simple: before we introduce a new ritual, tool, or acronym, we delete something that’s already eating cycles. If we can’t name what gets removed, we hold the idea until we can. The reason’s pragmatic: teams don’t fail because they lack initiatives; they fail because they’re full. ... As leaders, we also protect deep work. We move approvals to asynchronous channels and time-box them. Our job is to reduce decision queue time, not to write longer memos. Subtraction leadership signals trust. It says, “We believe you can do the job without us narrating it.” We still set clear constraints—budgets, reliability targets, security boundaries—but within those, we make space. ... Incident leadership isn’t a special hat; it’s a practiced ritual. We use the same six steps every time so people can stay calm and useful: declare, assign, annotate, stabilize, learn, thank. One sentence each: we declare loudly with a unique ID; we assign an incident commander who doesn’t touch keyboards; we annotate a live timeline; we stabilize by reducing blast radius; we learn with a blameless writeup; we thank the humans who did the work. Yes, every time. We script away friction. A tiny helper creates the channel, pins the template, and tags the right folks, so no one rifles through docs when cortisol’s high.


Private AI is the Future of BFSI Sector: Here’s Why

The public cloud, while offering initial scalability, presents significant hurdles for the Indian BFSI sector. Financial institutions manage vast troves of sensitive data. Storing and processing this data in a shared, external environment introduces unacceptable cyber risks. This is particularly critical in India, where regulators like the Reserve Bank of India (RBI) have stringent data localisation policies, making data sovereignty non-negotiable. ... Private AI offers a powerful solution to these challenges by creating a zero-trust, air-gapped environment. It keeps data and AI models on-premise, allowing institutions to maintain absolute control over their most valuable assets. It complies with regulatory mandates and global standards, mitigating the top barriers to AI adoption. The ability to guarantee that sensitive data never leaves the organisation’s infrastructure is a competitive advantage that public cloud offerings simply cannot replicate. ... For a heavily-regulated industry like BFSI, reaching such a level of automation and complying with regulations is quite the challenge. Private AI knocks it out of the park, paving the way for a truly secure and autonomous future. For the Indian BFSI sector, this means a significant portion of clerical and repetitive tasks will be handled by these AI-FTEs, allowing for a strategic redeployment of human capital into supervisory roles, which will, in turn, flatten organisational structures and boost retention.


Cyber moves from back office to boardroom – and investors are paying attention

Greater awareness has emerged as businesses shift from short-term solutions adopted during the pandemic to long-term, strategic partnerships with specialist cyber security providers. Increasingly, organizations recognize that cyber security requires an integrated approach involving continuous monitoring and proactive risk management. ... At the same time, government regulation is putting company directors firmly on the hook. The UK’s proposed Cyber Security and Resilience Bill will make senior executives directly accountable for managing cyber risks and ensuring operational resilience, bringing the UK closer to European frameworks like the NIS2 Directive and DORA. This is changing how cyber security is viewed at the top. It’s not just about ticking boxes or passing audits. It is now a central part of good governance. For investors, strong cyber capabilities are becoming a mark of well-run companies. For acquirers, it’s becoming a critical filter for M&A, particularly when dealing with businesses that hold sensitive data or operate critical systems. This regulatory push is part of a broader global shift towards greater accountability. In response, businesses are increasingly adopting governance models that embed cyber risk management into their strategic decision-making processes. 


Why satellite cybersecurity threats matter to everyone

There are several practices to keep in mind for developing a secure satellite architecture. First, establish situational awareness across the five segments of space by monitoring activity. You cannot protect what you cannot see, and there is limited real-time visibility into the cyber domain, which is critical to space operations. Second, be threat-driven when mitigating cyber risks. Vulnerability does not necessarily equal mission risk. It is important to prioritize mitigating those vulnerabilities that impact the particular mission of that small satellite. Third, make every space professional a cyber safety officer. Unlike any other domain, there are no operations in space without the cyber domain. Emotionally connecting the safety of the cyber domain to space mission outcomes is imperative. When designing a secure satellite architecture, it is critical to design with the probability of cyber security compromises front of mind. It is not realistic to design a completely “non-hackable” architecture. However, it is realistic to design an architecture that balances protection and resilience, designing protections that make the cost of compromise high for the adversary, and resilience that makes the cost of compromise low for the mission. Security should be built in at the lowest abstraction layer of the satellite, including containerization, segmentation, redundancy and compartmentalization.


Tiny quantum dots unlock the future of unbreakable encryption

For four decades, the holy grail of quantum key distribution (QKD) -- the science of creating unbreakable encryption using quantum mechanics -- has hinged on one elusive requirement: perfectly engineered single-photon sources. These are tiny light sources that can emit one particle of light (photon) at a time. But in practice, building such devices with absolute precision has proven extremely difficult and expensive. To work around that, the field has relied heavily on lasers, which are easier to produce but not ideal. These lasers send faint pulses of light that contain a small, but unpredictable, number of photons -- a compromise that limits both security and the distance over which data can be safely transmitted, as a smart eavesdropper can "steal" the information bits that are encoded simultaneously on more than one photon. ... To prove it wasn't just theory, the team built a real-world quantum communication setup using a room-temperature quantum dot source. They ran their new reinforced version of the well-known BB84 encryption protocol -- the backbone of many quantum key distribution systems -- and showed that their approach was not only feasible but superior to existing technologies. What's more, their approach is compatible with a wide range of quantum light sources, potentially lowering the cost and technical barriers to deploying quantum-secure communication on a large scale.


Are regulatory frameworks fueling innovation or stalling expansion in the data center market?

On a basic level, demonstrating the broader value of a data center to its host market, whether through job creation or tax revenues, helps ensure alignment with evolving regulatory frameworks and reinforces confidence among financial institutions. From banks to institutional investors, visible community and policy alignment help de-risk these capital-intensive projects and strengthen the case for long-term investment. ... With regulatory considerations differing significantly from region to region, data center market growth isn’t linear. In the Middle East, for example, where policy is supportive and there is significant capital investment, it's somewhat easier to build and operate a data center than in places like the EU, where regulation is far more complex. Taking the UAE as an example, regulatory frameworks in the GCC around data sovereignty require data of national importance to be stored in the country of origin. ... In this way, the regulatory and data sovereignty policies are driving the need for localized data centers. However, due to the borderless nature of the digital economy, there is also a growing need for data centers to become location-agnostic, so that data can move in and out of regions with different regulatory frameworks and customers can establish global, not just local, hubs. 


Cross border seamless travel Is closer than you think

At the heart of this transformation is the Digital Travel Credential (DTC), developed by the International Civil Aviation Organization (ICAO). The DTC is a digital replica of your passport, securely stored and ready to be shared at the tap of a screen. But here’s the catch: the current version of the DTC packages all your passport information – name, number, nationality, date of birth – into one file. That works well for border agencies, who need the full picture. But airlines? They typically only require a few basic details to complete check-in and security screening. Sharing the entire passport file just to access your name and date of birth isn’t just inefficient and it’s a legal problem in many jurisdictions. Under data protection laws like the EU’s GDPR, collecting more personal information than necessary is a breach. ... While global standards take time to update, the aviation industry is already moving forward. Airlines, airports, and governments are piloting digital identity programs (using different forms of digital ID) and biometric journeys built around the principles of consent and minimal data use. IATA’s One ID framework is central to this momentum. One ID defines how a digital identity like the DTC can be used in practice: verifying passengers, securing consent, and enabling a paperless journey from curb to gate.


Tackling cybersecurity today: Your top challenge and strategy

The rise of cloud-based tools and hybrid work has made it easier than ever for employees to adopt new apps or services without formal review. While the intent is often to move faster or collaborate better, these unapproved tools open doors to data exposure, regulatory gaps, and untracked vendor risk. Our approach is to bring Shadow IT into the light. Using TrustCloud’s platform, organizations can automatically discover unmanaged applications, flag unauthorized connections, and map them to the relevant compliance controls. ... Shadow IT’s impact goes beyond convenience. Unvetted tools can expose sensitive data, introduce compliance gaps, and create hidden third-party dependencies. The stakes are even higher in regulated industries, where a single misstep can result in financial penalties or reputational damage. Analysts like Gartner predict that by 2027, nearly three-quarters of employees will adopt technology outside the IT team’s visibility, a staggering shift that leaves cybersecurity and compliance teams racing to maintain control. ... Without visibility and controls, every unsanctioned tool becomes a potential weak spot, complicating threat detection, increasing exposure to regulatory penalties, and making incident response far more challenging. For security and compliance teams, managing Shadow IT isn’t just about locking things down; it’s about regaining oversight and trust in an environment where technology adoption is decentralized and constant.

Daily Tech Digest - August 24, 2025


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Creating the ‘AI native’ generation: The role of digital skills in education

Boosting AI skills has the potential to drive economic growth and productivity and create jobs, but ambition must be matched with effective delivery. We must ensure AI is integrated into education in a way that encourages students to maintain critical thinking skills, skeptically assess AI outputs, and use it responsibly and ethically. Education should also inspire future tech talent and prepare them for the workplace. ... AI fluency is only one part of the picture. Amid a global skills gap, we also need to capture the imaginations of young people to work in tech. To achieve this, AI and technology education must be accessible, meaningful, and aspirational. That requires coordinated action from schools, industry, and government to promote the real-world impact of digital skills and create clearer, more inspiring pathways into tech careers and expose students how AI is applied in various professions. Early exposure to AI can do far more than build fluency, it can spark curiosity, confidence and career ambition towards high-value sectors like data science, engineering and cybersecurity—areas where the UK must lead. ... Students who learn how to use AI now will build the competencies that industries want and need for years to come. But this will form the first stage of a broader AI learning arc where learning and upskilling become a lifelong mindset, not a single milestone. 


What is the State of SIEM?

In addition to high deployment costs, many organizations grapple with implementing SIEM. A primary challenge is SIEM configuration -- given that the average organization has more than 100 different data sources that must plug into the platform, according to an IDC report. It can be daunting for network staff to do the following when deploying SIEM: Choose which data sources to integrate; Set up SIEM correlation rules that define what will be classified as a security event; and Determine the alert thresholds for specific data and activities. It's equally challenging to manage the information and alerts a SIEM platform issues. If you fine-tune too much, the result might be false positives as the system triggers alarms about events that aren't actually threats. This is a time-stealer for network techs and can lead to staff fatigue and frustration. In contrast, if the calibration is too liberal, organizations run the risk of overlooking something that could be vital. Network staff must also coordinate with other areas of IT and the company. For example, what if data safekeeping and compliance regulations change? Does this change SIEM rule sets? What if the IT applications group rolls out new systems that must be attached to SIEM? Can the legal department or auditors tell you how long to store and retain data for eDiscovery or for disaster backup and recovery? And which data noise can you discard as waste?


AI Data Centers: A Popular Term That’s Hard to Define

The tricky thing about trying to define AI data centers based on characteristics like those described above is that none of those features is unique to AI data centers. For example, hyperscale data centers – meaning very large facilities capable of accommodating more than a hundred thousand servers in some cases – existed before modern AI debuted. AI has made large-scale data centers more important because AI workloads require vast infrastructures, but it’s not as if no one was building large data centers before AI rose to prominence. Likewise, it has long been possible to deploy GPU-equipped servers in data centers. ... Likewise, advanced cooling systems and innovative approaches to data center power management are not unique to the age of generative AI. They, too, predated AI data centers. ... Arguably, an AI data center is ultimately defined by what it does (hosting AI workloads) more than by how it does it. So, before getting hung up on the idea that AI requires investment in a new generation of data centers, it’s perhaps healthier to think about how to leverage the data centers already in existence to support AI workloads. That perspective will help the industry avoid the risk of overinvesting in new data centers designed specifically for AI – and as a bonus, it may save money by allowing businesses to repurpose the data centers they already own to meet their AI needs as well.


Password Managers Vulnerable to Data Theft via Clickjacking

Tóth showed how an attacker can use DOM-based extension clickjacking and the autofill functionality of password managers to exfiltrate sensitive data stored by these applications, including personal data, usernames and passwords, passkeys, and payment card information. The attacks demonstrated by the researcher require 0-5 clicks from the victim, with a majority requiring only one click on a harmless-looking element on the page. The single-click attacks often involved exploitation of XSS or other vulnerabilities. DOM, or Document Object Model, is an object tree created by the browser when it loads an HTML or XML web page. ... Tóth’s attack involves a malicious script that manipulates user interface elements injected by browser extensions into the DOM. “The principle is that a browser extension injects elements into the DOM, which an attacker can then make invisible using JavaScript,” he explained. According to the researcher, some of the vendors have patched the vulnerabilities, but fixes have not been released for Bitwarden, 1Password, iCloud Passwords, Enpass, LastPass, and LogMeOnce. SecurityWeek has reached out to these companies for comment. Bitwarden said a fix for the vulnerability is being rolled out this week with version 2025.8.0. LogMeOnce said it’s aware of the findings and its team is actively working on resolving the issue through a security update.


Iskraemeco India CEO: ERP, AI, and the future of utility leadership

We see a clear convergence ahead, where ERP systems like Infor’s will increasingly integrate with edge AI, embedded IoT, and low-code automation to create intelligent, responsive operations. This is especially relevant in utility scenarios where time-sensitive data must drive immediate action. For instance, our smart kits – equipped with sensor technology – are being designed to detect outages in real time and pinpoint exact failure points, such as which pole needs service during a natural disaster. This type of capability, powered by embedded IoT and edge computing, enables decisions to be made closer to the source, reducing downtime and response lag.  ... One of the most important lessons we've learned is that success in complex ERP deployments is less about customisation and more about alignment, across leadership, teams, and technology. In our case, resisting the urge to modify the system and instead adopting Infor’s best-practice frameworks was key. It allowed us to stay focused, move faster, and ensure long-term stability across all modules. In a multi-stakeholder environment – where regulatory bodies, internal departments, and technology partners are all involved – clarity of direction from leadership made all the difference. When the expectation is clear that we align to the system, and not the other way around, it simplifies everything from compliance to team onboarding.


Experts Concerned by Signs of AI Bubble

"There's a huge boom in AI — some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears," Kai Wu, founder and chief investment officer of Sparkline Capital, told the Wall Street Journal last year. There are even doubters inside the industry. In July, recently ousted CEO of AI company Stability AI Emad Mostaque told banking analysts that "I think this will be the biggest bubble of all time." "I call it the 'dot AI’ bubble, and it hasn’t even started yet," he added at the time. Just last week, Jeffrey Gundlach, billionaire CEO of DoubleLine Capital, also compared the AI craze to the dot com bubble. "This feels a lot like 1999," he said during an X Spaces broadcast last week, as quoted by Business Insider. "My impression is that investors are presently enjoying the double-top of the most extreme speculative bubble in US financial history," Hussman Investment Trust president John Hussman wrote in a research note. In short, with so many people ringing the alarm bells, there could well be cause for concern. And the consequences of an AI bubble bursting could be devastating. ... While Nvidia would survive such a debacle, the "ones that are likely to bear the brunt of the correction are the providers of generative AI services who are raising money on the promise of selling their services for $20/user/month," he argued.


OpenCUA’s open source computer-use agents rival proprietary models from OpenAI and Anthropic

Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private. “As the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,” the researchers state in their paper. ... The tool streamlines data collection by running in the background on an annotator’s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements.  ... The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed “inner monologue” for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.


How to remember everything

MyMind is a clutter-free bookmarking and knowledge-capture app without folders or manual content organization.There are no templates, manual customizations, or collaboration tools. Instead, MyMind recognizes and formats the content type elegantly. For example, songs, movies, books, and recipes are displayed differently based on MyMind’s detection, regardless of the source, as are pictures and videos. MyMind uses AI to auto-tag everything and allows custom tags. Every word, including those in pictures, is indexed. You can take pictures of information, upload them to MyMind, and find them later by searching a word or two found in the picture. Copying a sentence or paragraph from an article will display the quote with a source link. Every data chunk is captured in a “card.” ... Alongside AI-enabled lifelogging tools like MyMind, we’re also entering an era of lifelogging hardware devices. One promising direction comes from a startup called Brilliant Labs. Its new $299 Halo glasses, available for pre-order and shipping in November, are lightweight AI glasses. The glasses have a long list of features — bone conduction sound, a camera, light weight, etc. — but the lifelogging enabler is an “agentic memory” system called Narrative. It captures information automatically from the camera and microphones and places it into a personal knowledge base. 


From APIs to Digital Twins: Warehouse Integration Strategies for Smarter Supply Chains

Digital twins create virtual replicas of warehouses and supply chains for monitoring and testing. A digital twin ingests live data from IoT sensors, machines, and transportation feeds to simulate how changes affect outcomes. For instance, GE’s “Digital Wind Farm” project feeds sensor data from each turbine into a cloud model, suggesting performance tweaks that boost energy output by ~20% (worth ~$100M more revenue per turbine). In warehousing, digital twins can model workflows (layout changes, staffing shifts, equipment usage) to identify bottlenecks or test improvements before physical changes. Paired with AI, these twins become predictive and prescriptive: companies can run thousands of what-if scenarios (like a port strike or demand surge) and adjust plans accordingly. ... Today’s warehouses are not just storage sheds; they are smart, interconnected nodes in the supply chain. Leveraging IIoT sensors, cloud APIs, AI analytics, robotics, and digital twins transforms logistics into a competitive advantage. Integrated systems reduce manual handoffs and errors: for example, automated picking and instant carrier booking can shorten fulfillment cycles from days to hours. Industry data bear this out, deploying these technologies can improve on-time delivery by ~20% and significantly lower operating costs.


Enterprise Software Spending Surges Despite AI ROI Shortfalls

AI capabilities increasingly drive software purchasing decisions. However, many organizations struggle with the gap between AI promise and practical ROI delivery. The disconnect stems from fundamental challenges in data accessibility and contextual understanding. Current AI implementations face significant obstacles in accessing the full spectrum of contextual data required for complex decision-making. "In complex use cases, where the exponential benefits of AI reside, AI still feels forced and contrived when it doesn't have the same amount and depth of contextual data required to read a situation," Kirkpatrick explained. Effective AI implementation requires comprehensive data infrastructure investments. Organizations must ensure AI models can access approved data sources while maintaining proper guardrails. Many IT departments are still working to achieve this balance. The challenge intensifies in environments where AI needs to integrate across multiple platforms and data sources. Well-trained humans often outperform AI on complex tasks because their experience allows them to read multiple factors and adjust contextually. "For AI to mimic that experience, it requires a wide range of data that can address factors across a wide range of dimensions," Kirkpatrick said. "That requires significant investment in data to ensure the AI has the information it needs at the right time, with the proper context, to function seamlessly, effectively, and efficiently."

Daily Tech Digest - August 23, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


Enterprise passwords becoming even easier to steal and abuse

Attackers actively target user credentials because they offer the most direct route or foothold into a targeted organization’s network. Once inside, attackers can move laterally across systems, searching for other user accounts to compromise, or they attempt to escalate their privileges and gain administrative control. This hunt for credentials extends beyond user accounts to include code repositories, where developers may have hard-coded access keys and other secrets into application source code. Attacks using valid credentials were successful 98% of the time, according to Picus Security. ... “CISOs and security teams should focus on enforcing strong, unique passwords, using MFA everywhere, managing privileged accounts rigorously and testing identity controls regularly,” Curran says. “Combined with well-tuned DLP and continuous monitoring that can detect abnormal patterns quickly, these measures can help limit the impact of stolen or cracked credentials.” Picus Security’s latest findings reveal a concerning gap between the perceived protection of security tools and their actual performance. An overall protection effectiveness score of 62% contrasts with a shockingly low 3% prevention rate for data exfiltration. “Failures in detection rule configuration, logging gaps and system integration continue to undermine visibility across security operations,” according to Picus Security.


Architecting the next decade: Enterprise architecture as a strategic force

In an age of escalating cyber threats and expanding digital footprints, security can no longer be layered on; it must be architected in from the start. With the rise of AI, IoT and even quantum computing on the horizon, the threat landscape is more dynamic than ever. Security-embedded architectures prioritize identity-first access control, continuous monitoring and zero-trust principles as baseline capabilities. ... Sustainability is no longer a side initiative; it’s becoming a first principle of enterprise architecture. As organizations face pressure from regulators, investors and customers to lower their carbon footprint, digital sustainability is gaining traction as a measurable design objective. From energy-efficient data centers to cloud optimization strategies and greener software development practices, architects are now responsible for minimizing the environmental impact of IT systems. The Green Software Foundation has emerged as a key ecosystem partner, offering measurement standards like software carbon intensity (SCI) and tooling for emissions-aware development pipelines. ... Technology leaders must now foster a culture of innovation, build interdisciplinary partnerships and enable experimentation while ensuring alignment with long-term architectural principles. They must guide the enterprise through both transformation and stability, navigating short-term pressures and long-term horizons simultaneously.


Capitalizing on Digital: Four Strategic Imperatives for Banks and Credit Unions

Modern architectures dissolve the boundary between core and digital. The digital banking solution is no longer a bolt-on to the core; the core and digital come together to form the accountholder experience. That user experience is delivered through the digital channel, but when done correctly, it’s enabled by the modern core. Among other things, the core transformation requires robust use of shared APIs, consistent data structures, and unified development teams. Leading financial institutions are coming to realize that core evaluations now must include an evaluable of their capability to enable the digital experience. Criteria like Availability, Reliability, Real-time, Speed and Security are now emerging as foundational requirements of a core to enable the digital experience. "If your core can’t keep up with your digital, you’re stuck playing catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital Engineers. ... Many institutions still operate with digital siloed in one department, while marketing, product, and operations pursue separate agendas. This leads to mismatched priorities — products that aren’t promoted effectively, campaigns that promise features operations can’t support, and technical fixes that don’t address the root cause of customer and member pain points. ... Small-business services are a case in point. Jack Henry’s Strategy Benchmark study found that 80% of CEOs plan to expand these services over the next two years. 


Bentley Systems CIO Talks Leadership Strategy and AI Adoption

The thing that’s really important for a CIO to be thinking about is that we are a microcosm for how all of the business functions are trying to execute the tactics against the strategy. What we can do across the portfolio is represent the strategy in real terms back to the business. We can say: These are all of the different places where we're thinking about investing. Does that match with the strategy we thought we were setting for ourselves? And where is there a delta and a difference? ... When I got my first CIO role, there was all of this conversation about business process. That was the part that I had to learn and figure out how to map into these broader, strategic conversations. I had my first internal IT role at Deutsche Bank, where we really talked about product model a lot -- thinking about our internal IT deliverables as products. When I moved to Lenovo, we had very rich business process and transformation conversations because we were taking the whole business through such a foundational change. I was able to put those two things together. It was a marriage of several things: running a product organization; marrying that to the classic IT way of thinking about business process; and then determining how that becomes representative to the business strategy.


What Is Active Metadata and Why Does It Matter?

Active metadata addresses the shortcomings of passive approaches by automatically updating the metadata whenever an important aspect of the information changes. Defining active metadata and understanding why it matters begins by looking at the shift in organizations’ data strategies from a focus on data acquisition to data consumption. The goal of active metadata is to promote the discoverability of information resources as they are acquired, adapted, and applied over time. ... From a data consumer’s perspective, active metadata adds depth and breadth to their perception of the data that fuels their decision-making. By highlighting connections between data elements that would otherwise be hidden, active metadata promotes logical reasoning about data assets. This is especially so when working on complex problems that involve a large number of disconnected business and technical entities.The active metadata analytics workflow orchestrates metadata management across platforms to enhance application integration, resource management, and quality monitoring. It provides a single, comprehensive snapshot of the current status of all data assets involved in business decision-making. The technology augments metadata with information gleaned from business processes and information systems. 


Godrej Enterprises CHRO on redefining digital readiness as culture, not tech

“Digital readiness at Godrej Enterprises Group is about empowering every employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just about technology adoption. It’s about building a workforce that is agile, continuously learning, and empowered to innovate.” This reframing reflects a broader trend across Indian industry, where digital transformation is no longer confined to IT departments but runs through every layer of an organisation. For Godrej Enterprises Group, this means designing a workplace where intrapreneurship is rewarded, innovation is constant, and employees are trained to think beyond immediate functions. ... “We’ve moved away from one-off training sessions to creating a dynamic ecosystem where learning is accessible, relevant, and continuous,” she said. “Learning is no longer a checkbox — it’s a shared value that energises our people every day.” This shift is underpinned by leadership development programmes and innovation platforms, ensuring that employees at every level are encouraged to experiment and share knowledge.  ... “We see digital skilling as a core business priority, not just an HR or L&D initiative,” she said. “By making digital skilling a shared responsibility, we foster a culture where learning is continuous, progress is visible, and success is celebrated across the organisation.”


AI is creeping into the Linux kernel - and official policy is needed ASAP

However, before you get too excited, he warned: "This is a great example of what LLMs are doing right now. You give it a small, well-defined task, and it goes and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this specific hash to use our standard API." Levin said another AI win is that "for those of us who are not native English speakers, it also helps with writing a good commit message. It is a common issue in the kernel world where sometimes writing the commit message can be more difficult than actually writing the code change, and it definitely helps there with language barriers." ... Looking ahead, Levin suggested LLMs could be trained to become good Linux maintainer helpers: "We can teach AI about kernel-specific patterns. We show examples from our codebase of how things are done. It also means that by grounding it into our kernel code base, we can make AI explain every decision, and we can trace it to historical examples." In addition, he said the LLMs can be connected directly to the Linux kernel Git tree, so "AI can go ahead and try and learn things about the Git repo all on its own." ... This AI-enabled program automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. The tool examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.


Applications and Architecture – When It’s Not Broken, Should You Try to Fix It?

No matter how reliable your application components are, they will need to be maintained, upgraded or replaced at some point. As elements in your application evolve, some will reach end of life status – for example, Redis 7.2 will reach end of life status for security updates in February 2026. Before that point, it’s necessary to assess the available options. For businesses in some sectors like financial services, running out of date and unsupported software is a potential failure for regulations on security and resilience. For example, the Payment Card Industry Data Security Standard version 4.0 enforces that teams should check all their software and hardware is supported every year; in the case of end of life software, teams must also provide a full plan for migration that will be completed within twelve months. ... For developers and software architects, understanding the role that any component plays in the overall application makes it easier to plan ahead. Even the most reliable and consistent component may need to change given outside circumstances. In the Discworld series, golems are so reliable that they become the standard for currency; at the same time, there are so many of them that any problem could affect the whole economy. When it comes to data caching, Redis has been a reliable companion for many developers. 


From cloud migration to cloud optimization

The report, based on insights from more than 2,000 IT leaders, reveals that a staggering 94% of global IT leaders struggle with cloud cost optimization. Many enterprises underestimate the complexities of managing public cloud resources and the inadvertent overspending that occurs from mismanagement, overprovisioning, or a lack of visibility into resource usage. This inefficiency goes beyond just missteps in cloud adoption. It also highlights how difficult it is to align IT cost optimization with broader business objectives. ... This growing focus sheds light on the rising importance of finops (financial operations), a practice aimed at bringing greater financial accountability to cloud spending. Adding to this complexity is the increasing adoption of artificial intelligence and automation tools. These technologies drive innovation, but they come with significant associated costs. ... The argument for greater control is not new, but it has gained renewed relevance when paired with cost optimization strategies. ... With 41% of respondents’ IT budgets still being directed to scaling cloud capabilities, it’s clear that the public cloud will remain a cornerstone of enterprise IT in the foreseeable future. Cloud services such as AI-powered automation remain integral to transformative business strategies, and public cloud infrastructure is still the preferred environment for dynamic, highly scalable workloads. Enterprises will need to make cloud deployments truly cost-effective.


The Missing Layer in AI Infrastructure: Aggregating Agentic Traffic

Software architects and engineering leaders building AI-native platforms are starting to notice familiar warning signs: sudden cost spikes on AI API bills, bots with overbroad permissions tapping into sensitive data, and a disconcerting lack of visibility or control over what these AI agents are doing. It’s a scenario reminiscent of the early days of microservices – before we had gateways and meshes to restore order – only now the "microservices" are semi-autonomous AI routines. Gartner has begun shining a spotlight on this emerging gap. ... Every major shift in software architecture eventually demands a mediation layer to restore control. When web APIs took off, API gateways became essential for managing authentication/authorization, rate limits, and policies. With microservices, service meshes emerged to govern internal traffic. Each time, the need only became clear once the pain of scale surfaced. Agentic AI is on the same path. Teams are wiring up bots and assistants that call APIs independently - great for demos ... So, what exactly is an AI Gateway? At its core, it’s a middleware component – either a proxy, service, or library – through which all AI agent requests to external services are channeled. Rather than letting each agent independently hit whatever API it wants, you route those calls via the gateway, which can then enforce policies and provide central management.



Daily Tech Digest - August 22, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Leveraging DevOps to accelerate the delivery of intelligent and autonomous care solutions

Fast iteration and continuous delivery have become standard in industries like e-commerce and finance. Healthcare operates under different rules. Here, the consequences of technical missteps can directly affect care outcomes or compromise sensitive patient information. Even a small configuration error can delay a diagnosis or impact patient safety. That reality shifts how DevOps is applied. The focus is on building systems that behave consistently, meet compliance standards automatically, and support reliable care delivery at every step. ... In many healthcare environments, developers are held back by slow setup processes and multi-step approvals that make it harder to contribute code efficiently or with confidence. This often leads to slower cycles and fragmented focus. Modern DevOps platforms help by introducing prebuilt, compliant workflow templates, secure self-service provisioning for environments, and real-time, AI-supported code review tools. In one case, development teams streamlined dozens of custom scripts into a reusable pipeline that provisioned compliant environments automatically. The result was a noticeable reduction in setup time and greater consistency across projects. Building on this foundation, DevOps also play a vital role in development and deployment of the Machine Learning Models. 


Tackling the DevSecOps Gap in Software Understanding

The big idea in DevSecOps has always been this: shift security left, embed it early and often, and make it everyone’s responsibility. This makes DevSecOps the perfect context for addressing the software understanding gap. Why? Because the best time to capture visibility into your software’s inner workings isn’t after it’s shipped—it’s while it’s being built. ... Software Bill of Materials (SBOMs) are getting a lot of attention—and rightly so. They provide a machine-readable inventory of every component in a piece of software, down to the library level. SBOMs are a baseline requirement for software visibility, but they’re not the whole story. What we need is end-to-end traceability—from code to artifact to runtime. That includes:Component provenance: Where did this library come from, and who maintains it? Build pipelines: What tools and environments were used to compile the software? Deployment metadata: When and where was this version deployed, and under what conditions? ... Too often, the conversation around software security gets stuck on source code access. But as anyone in DevSecOps knows, access to source code alone doesn’t solve the visibility problem. You need insight into artifacts, pipelines, environment variables, configurations, and more. We’re talking about a whole-of-lifecycle approach—not a repo review.


Navigating the Legal Landscape of Generative AI: Risks for Tech Entrepreneurs

The legal framework governing generative AI is still evolving. As the technology continues to advance, the legal requirements will also change. Although the law is still playing catch-up with the technology, several jurisdictions have already implemented regulations specifically targeting AI, and others are considering similar laws. Businesses should stay informed about emerging regulations and adapt their practices accordingly. ... Several jurisdictions have already enacted laws that specifically govern the development and use of AI, and others are considering such legislation. These laws impose additional obligations on developers and users of generative AI, including with respect to permitted uses, transparency, impact assessments and prohibiting discrimination. ... In addition to AI-specific laws, traditional data privacy and security laws – including the EU General Data Protection Regulation (GDPR) and U.S. federal and state privacy laws – still govern the use of personal data in connection with generative AI. For example, under GDPR the use of personal data requires a lawful basis, such as consent or legitimate interest. In addition, many other data protection laws require companies to disclose how they use and disclose personal data, secure the data, conduct data protection impact assessments and facilitate individual rights, including the right to have certain data erased. 


Five ways OSINT helps financial institutions to fight money laundering

By drawing from public data sources available online, such as corporate registries and property ownership records, OSINT tools can provide investigators with a map of intricate corporate and criminal networks, helping them unmask UBOs. This means investigators can work more efficiently to uncover connections between people and companies that they otherwise might not have spotted. ... External intelligence can help analysts to monitor developments, so that newer forms of money laundering create fewer compliance headaches for firms. Some of the latest trends include money muling, where criminals harness channels like social media to recruit individuals to launder money through their bank accounts, and trade-based laundering, which allows bad actors to move funds across borders by exploiting international complexity. OSINT helps identify these emerging patterns, enabling earlier intervention and minimizing enforcement risks. ... When it comes to completing suspicious activity reports (SARs), many financial institutions rely on internal data, spending millions on transaction monitoring, for instance. While these investments are unquestionably necessary, external intelligence like OSINT is often neglected – despite it often being key to identifying bad actors and gaining a full picture of financial crime risk. 


The hard problem in data centres isn’t cooling or power – it’s people

Traditional infrastructure jobs no longer have the allure they once did, with Silicon Valley and startups capturing the imagination of young talent. Let’s be honest – it just isn’t seen as ‘sexy’ anymore. But while people dream about coding the next app, they forget someone has to build and maintain the physical networks that power everything. And that ‘someone’ is disappearing fast. Another factor is that the data centre sector hasn’t done a great job of telling its story. We’re seen as opaque, technical and behind closed doors. Most students don’t even know what a data centre is, and until something breaks  it doesn’t even register. That’s got to change. We need to reframe the narrative. Working in data centres isn’t about grey boxes and cabling. It’s about solving real-world problems that affect billions of people around the world, every single second of every day. ... Fixing the skills gap isn’t just about hiring more people. It’s about keeping the knowledge we already have in the industry and finding ways to pass it on. Right now, we’re on the verge of losing decades of expertise. Many of the engineers, designers and project leads who built today’s data centre infrastructure are approaching retirement. While projects operate at a huge scale and could appear exciting to new engineers, we also have inherent challenges that come with relatively new sectors. 


Multi-party computation is trending for digital ID privacy: Partisia explains why

The main idea is achieving fully decentralized data, even biometric information, giving individuals even more privacy. “We take their identity structure and we actually run the matching of the identity inside MPC,” he says. This means that neither Partisia nor the company that runs the structure has the full biometric information. They can match it without ever decrypting it, Bundgaard explains. Partisia says it’s getting close to this goal in its Japan experiment. The company has also been working on a similar goal of linking digital credentials to biometrics with U.S.-based Trust Stamp. But it is also developing other identity-related uses, such as proving age or other information. ... Multiparty computation protocols are closing that gap: Since all data is encrypted, no one learns anything they did not already know. Beyond protecting data, another advantage is that it still allows data analysts to run computations on encrypted data, according to Partisia. There may be another important role for this cryptographic technique when it comes to privacy. Blockchain and multiparty computation could potentially help lessen friction between European privacy standards, such as eIDAS and GDPR, and those of other countries. “I have one standard in Japan and I travel to Europe and there is a different standard,” says Bundgaard. 


MIT report misunderstood: Shadow AI economy booms while headlines cry failure

While headlines trumpet that “95% of generative AI pilots at companies are failing,” the report actually reveals something far more remarkable: the fastest and most successful enterprise technology adoption in corporate history is happening right under executives’ noses. ... The MIT researchers discovered what they call a “shadow AI economy” where workers use personal ChatGPT accounts, Claude subscriptions and other consumer tools to handle significant portions of their jobs. These employees aren’t just experimenting — they’re using AI “multiples times a day every day of their weekly workload,” the study found. ... Far from showing AI failure, the shadow economy reveals massive productivity gains that don’t appear in corporate metrics. Workers have solved integration challenges that stymie official initiatives, proving AI works when implemented correctly. “This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools,” the report explains. Some companies have started paying attention: “Forward-thinking organizations are beginning to bridge this gap by learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives.” The productivity gains are real and measurable, just hidden from traditional corporate accounting. 


The Price of Intelligence

Indirect prompt injection represents another significant vulnerability in LLMs. This phenomenon occurs when an LLM follows instructions embedded within the data rather than the user’s input. The implications of this vulnerability are far-reaching, potentially compromising data security, privacy, and the integrity of LLM-powered systems. At its core, indirect prompt injection exploits the LLM’s inability to consistently differentiate between content it should process passively (that is, data) and instructions it should follow. While LLMs have some inherent understanding of content boundaries based on their training, they are far from perfect. ... Jailbreaks represent another significant vulnerability in LLMs. This technique involves crafting user-controlled prompts that manipulate an LLM into violating its established guidelines, ethical constraints, or trained alignments. The implications of successful jailbreaks can potentially undermine the safety, reliability, and ethical use of AI systems. Intuitively, jailbreaks aim to narrow the gap between what the model is constrained to generate, because of factors such as alignment, and the full breadth of what it is technically able to produce. At their core, jailbreaks exploit the flexibility and contextual understanding capabilities of LLMs. While these models are typically designed with safeguards and ethical guidelines, their ability to adapt to various contexts and instructions can be turned against them.


The Strategic Transformation: When Bottom-Up Meets Top-Down Innovation

The most innovative organizations aren’t always purely top-down or bottom-up—they carefully orchestrate combinations of both. Strategic leadership provides direction and resources, while grassroots innovation offers practical insights and the capability to adapt rapidly. Chynoweth noted how strategic portfolio management helps companies “keep their investments in tech aligned to make sure they’re making the right investments.” The key is creating systems that can channel bottom-up innovations while ensuring they support the organization’s strategic objectives. Organizations that succeed in managing both top-down and bottom-up innovation typically have several characteristics. They establish clear strategic priorities from leadership while creating space for experimentation and adaptation. They implement systems for capturing and evaluating innovations regardless of their origin. And they create mechanisms for scaling successful pilots while maintaining strategic alignment. The future belongs to enterprises that can master this balance. Pure top-down enterprises will likely continue to struggle with implementation realities and changing market conditions. In contrast, pure bottom-up organizations would continue to lack the scale and coordination needed for significant impact.


Digital-first doesn’t mean disconnected for this CEO and founder

“Digital-first doesn’t mean disconnected – it means being intentional,” she said. For leaders it creates a culture where the people involved feel supported, wherever they’re working, she thinks. She adds that while many organisations found themselves in a situation where the pandemic forced them to establish a remote-first system, very few actually fully invested in making it work well. “High performance and innovation don’t happen in isolation,” said Feeney. “They happen when people feel connected, supported and inspired.” Sentiments which she explained are no longer nice to have, but are becoming a part of modern organisational infrastructure. One in which people are empowered to do their best work on their own terms. ... “One of the biggest challenges I have faced as a founder was learning to slow down, especially when eager to introduce innovation. Early on, I was keen to implement automation and technology, but I quickly realised that without reliable data and processes, these tools could not reach their full potential.” What she learned was, to do things correctly, you have to stop, review your foundations and processes and when you encounter an obstacle, deal with it, because though the stopping and starting might initially be frustrating, you can’t overestimate the importance of clean data, the right systems and personnel alignment with new tech.

Daily Tech Digest - August 21, 2025


Quote for the day:

"The master has failed more times than the beginner has even tried." -- Stephen McCranie


Ghost Assets Drain 25% of IT Budgets as ITAM Confidence Gap Widens

The survey results reveal fundamental breakdowns in communication, trust, and operational alignment that threaten both current operations and future digital transformation initiatives. ... The survey's most alarming finding centers on ghost assets. These are IT resources that continue consuming budget and creating risk while providing zero business value. The phantom resources manifest across the entire technology stack, from forgotten cloud instances to untracked SaaS subscriptions. ... The tool sprawl paradox is striking. Sixty-five percent of IT managers use six or more ITAM tools yet express confidence in their setup. Non-IT roles use fewer tools but report significantly lower integration confidence. This suggests IT teams have adapted to complexity through process workarounds rather than achieving true operational efficiency. ... "Over the next two to three years, I see this confidence gap continuing to widen," Collins said. "This is primarily fueled by the rapid acceleration of hybrid work models, mass migration to the cloud, and the burgeoning adoption of artificial intelligence, creating a perfect storm of complexity for IT asset management teams." Collins noted that the distributed workforce has shattered the traditional, centralized view of IT assets. Cloud migration introduces shadow IT, ghost assets, and uncontrolled sprawl that bypass traditional procurement channels.


Documents: The architect’s programming language

The biggest bottlenecks in the software lifecycle have nothing to do with code. They’re people problems: communication, persuasion, decision-making. So in order to make an impact, architects have to consistently make those things happen, sprint after sprint, quarter after quarter. How do you reliably get the right people in the right place, at the right time, talking about the right things? Is there a transfer protocol or infrastructure-as-code tool that works on human beings? ... A lot of programmers don’t feel confident in their writing skills, though. It’s hard to switch from something you’re experienced at, where quality speaks for itself (programming) to something you’re unfamiliar with, where quality depends on the reader’s judgment (writing). So what follows is a crash course: just enough information to help you confidently write good (even great) documents, no matter who you are. You don’t have to have an English degree, or know how to spell “idempotent,” or even write in your native language. You just have to learn a few techniques. ... The main thing you want to avoid is a giant wall of text. Often the people whose attention your document needs most are the people with the most demands on their time. If you send them a four-page essay, there’s a good chance they’ll never have the time to get through it. 


CIOs at the Crossroads of Innovation and Trust

Consulting firm McKinsey's Technology Trends Outlook 2025 paints a vivid picture: The CIO is no longer a technologist but one who writes a narrative where technology and strategy merge. Four forces together - artificial intelligence at scale, agentic AI, cloud-edge synergy and digital trust - are a perfect segue for CIOs to navigate the technology forces of the future and turn disruption into opportunities. ... As the attack surface continues to expand due to advances in AI, connected devices and cloud tech - and because the regulatory environment is still in a constant flux - achieving enterprise-level cyber resilience is critical. ... McKinsey's data indicates - and it's no revelation - a global shortage of AI, cloud and security experts. But leading companies are overcoming this bottleneck by upskilling their workers. AI copilots train employees, while digital agents handle repetitive tasks. The boundary between human and machine is blurring, and the CIO is the alchemist, creating hybrid teams that drive transformation. If there's a single plot twist for 2025, it's this: Technology innovation is assessed not by experimentation but by execution. Tech leaders have shifted from chasing shiny objects to demanding business outcomes, from adopting new platforms to aligning every digital investment with growth, efficiency and risk reduction.


Bigger And Faster Or Better And Greener? The EU Needs To Define Its Priorities For AI

Since Europe is currently not clear on its priorities for AI development, US-based Big Tech companies can use their economic and discursive power to push their own ambitions onto Europe. Through publications directly aimed at EU policy-makers, companies promote their services as if they are perfectly aligned with European values. By promising the EU can have it all — bigger, faster, greener and better AI — tech companies exploit this flexible discursive space to spuriously position themselves as “supporters” of the EU’s AI narrative. Two examples may illustrate this: OpenAI and Google. ... Big Tech’s promises to develop AI infrastructure faster while optimizing sustainability, enhancing democracy, and increasing competitiveness seem too good to be true — which in fact they are. Not surprisingly, their claims are remarkably low on details and far removed from the reality of these companies’ immense carbon emissions. Bigger and faster AI is simply incompatible with greener and better AI. And yet, one of the main reasons why Big Tech companies’ claims sound agreeable is that the EU’s AI Continent Action Plan fails to define clear conditions and set priorities in how to achieve better and greener AI. So what kind of changes does the EU AI-CAP need? First, it needs to set clear goalposts on what constitutes a democratic and responsible use of AI, even if this happens at the expense of economic competitiveness. 


Myth Or Reality: Will AI Replace Computer Programmers?

The truth is that the role of the programmer, in line with just about every other professional role, will change. Routine, low-level tasks such as customizing boilerplate code and checking for coding errors will increasingly be done by machines. But that doesn’t mean basic coding skills won’t still be important. Even if humans are using AI to create code, it’s critical that we can understand it and step in when it makes mistakes or does something dangerous. This shows that humans with coding skills will still be needed to meet the requirement of having a “human-in-the-loop”. This is essential for safe and ethical AI, even if its use is restricted to very basic tasks. This means entry-level coding jobs don’t vanish, but instead transition into roles where the ability to automate routine work and augment our skills with AI becomes the bigger factor in the success or failure of a newbie programmer. Alongside this, entirely new development roles will also emerge, including AI project management, specialists in connecting AI and legacy infrastructure, prompt engineers and model trainers. We’re also seeing the emergence of entirely new methods of developing software, using generative AI prompts alone. Recently, this has been named "vibe coding" because of the perceived lack of stress and technical complexity in relation to traditional coding.


FinOps as Code – Unlocking Cloud Cost Optimization

FinOps as Code (FaC) is the practice of applying software engineering principles, particularly those from Infrastructure as Code (IaC) to cloud financial management. It considers financial operations, such as cost management and resource allocation, as code-driven processes that can be automated, version-controlled, and collaborated on between the teams in an organization. FinOps as Code blends financial operations with cloud native practices to optimize and manage cloud spending programmatically using code. It enables FinOps principles and guidelines to be coded directly into the CI/CD pipelines. ... When you bring FinOps into your organization, you know where and how you spend your money. FinOps provides a cultural transformation to your organization where each team member is aware of how their usage of the cloud affects your final costs associated with such usage. While cloud spend is no longer merely an IT issue, you should be able to manage your cloud spend properly. ... FinOps as Code (FaC) is an emerging trend enabling the infusion of FinOps principles in the software development lifecycle using Infrastructure as Code (IaC) and automation. It helps embed cost awareness directly into the development process, encouraging collaboration between engineering and finance teams, and improving cloud resource utilization. Additionally, it also empowers your teams to take ownership of their cloud usage in the organization.


6 IT management practices certain to kill IT productivity

Eliminating multitasking is too much to shoot for, because there are, inevitably, more bits and pieces of work than there are staff to work on them. Also, the political pressure to squeeze something in usually overrules the logic of multitasking less. So instead of trying to stamp it out, attack the problem at the demand side instead of the supply side by enforcing a “Nothing-Is-Free” rule. ... Encourage a “culture of process” throughout your organization. Yes, this is just the headline, and there’s a whole lot of thought and work associated with making it real. Not everything can be reduced to an e-zine article. Sorry. ... If you hold people accountable when something goes wrong, they’ll do their best to conceal the problem from you. And the longer nobody deals with a problem, the worse it gets. ... Whenever something goes wrong, first fix the immediate problem — aka “stop the bleeding.” Then, figure out which systems and processes failed to prevent the problem and fix them so the organization is better prepared next time. And if it turns out the problem really was that someone messed up, figure out if they need better training and coaching, if they just got unlucky, if they took a calculated risk, or if they really are a problem employee you need to punish — what “holding people accountable” means in practice.


Resilience and Reinvention: How Economic Shocks Are Redefining Software Quality and DevOps

Reducing investments in QA might provide immediate financial relief, but it introduces longer-term risks. Releasing software with undetected bugs and security vulnerabilities can quickly erode customer trust and substantially increase remediation costs. History demonstrates that neglected QA efforts during financial downturns inevitably lead to higher expenses and diminished brand reputations due to subpar software releases. ... Automation plays an essential role in filling gaps caused by skills shortages. Organizations worldwide face a substantial IT skills shortage that will cost them $5.5 trillion by 2026, according to an IDC survey of North American IT leaders. ... The complexity of the modern software ecosystem magnifies the impact of economic disruptions. Delays or budget constraints in one vendor can create spillover, causing delays and complications across entire project pipelines. These interconnected dependencies magnify the importance of better operational visibility. Visibility into testing and software quality processes helps teams anticipate these ripple effects. ... Effective resilience strategies focus less on budget increases and more on strategic investment in capabilities that deliver tangible efficiency and reliability benefits. Technologies that support centralized testing, automation, and integrated quality management become critical investments rather than optional expenditures.


Current Debate: Will the Data Center of the Future Be AC or DC?

“DC power has been around in some data centers for about 20 years,” explains Peter Panfil, vice president of global power at Vertiv. “400V and 800V have been utilized in UPS for ages, but what is beginning to emerge to cope with the dynamic load shifts in AI are [new] applications of DC.” ... Several technical hurdles must be overcome before DC achieves broad adoption in the data center. The most obvious challenge is component redesign. Nearly every component – from transformers to breakers – must be re-engineered for DC operation. That places a major burden on transformer, PDU, substation, UPS, converter, regulator, and other electrical equipment suppliers. High-voltage DC also raises safety challenges. Arc suppression and fault isolation are more complex. Internal models are being devised to address this problem with solid-state circuit breakers and hybrid protection schemes. In addition, there is no universal standard for DC distribution in data centers, which complicates interoperability and certification. ... On the sustainability front, DC has a clear edge. DC power results in lower conversion losses, which equate to less wasted energy. Further, DC is more compatible with solar PV and battery storage, reducing long-term Opex and carbon costs.


Weak Passwords and Compromised Accounts: Key Findings from the Blue Report 2025

In the Blue Report 2025, Picus Labs found that password cracking attempts succeeded in 46% of tested environments, nearly doubling the success rate from last year. This sharp increase highlights a fundamental weakness in how organizations are managing – or mismanaging – their password policies. Weak passwords and outdated hashing algorithms continue to leave critical systems vulnerable to attackers using brute-force or rainbow table attacks to crack passwords and gain unauthorized access. Given that password cracking is one of the oldest and most reliably effective attack methods, this finding points to a serious issue: in their race to combat the latest, most sophisticated new breed of threats, many organizations are failing to enforce strong basic password hygiene policies while failing to adopt and integrate modern authentication practices into their defenses. ... The threat of credential abuse is both pervasive and dangerous, yet as the Blue Report 2025 highlights, organizations are still underprepared for this form of attack. And once attackers obtain valid credentials, they can easily move laterally, escalate privileges, and compromise critical systems. Infostealers and ransomware groups frequently rely on stolen credentials to spread across networks, burrowing deeper and deeper, often without triggering detection.