
Daily Tech Digest - August 24, 2025


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Creating the ‘AI native’ generation: The role of digital skills in education

Boosting AI skills has the potential to drive economic growth and productivity and create jobs, but ambition must be matched with effective delivery. We must ensure AI is integrated into education in a way that encourages students to maintain critical thinking skills, skeptically assess AI outputs, and use it responsibly and ethically. Education should also inspire future tech talent and prepare them for the workplace. ... AI fluency is only one part of the picture. Amid a global skills gap, we also need to capture the imaginations of young people to work in tech. To achieve this, AI and technology education must be accessible, meaningful, and aspirational. That requires coordinated action from schools, industry, and government to promote the real-world impact of digital skills, create clearer and more inspiring pathways into tech careers, and expose students to how AI is applied in various professions. Early exposure to AI can do far more than build fluency; it can spark curiosity, confidence and career ambition toward high-value sectors like data science, engineering and cybersecurity – areas where the UK must lead. ... Students who learn how to use AI now will build the competencies that industries want and need for years to come. But this is only the first stage of a broader AI learning arc, in which learning and upskilling become a lifelong mindset, not a single milestone.


What is the State of SIEM?

In addition to high deployment costs, many organizations grapple with implementing SIEM. A primary challenge is SIEM configuration -- given that the average organization has more than 100 different data sources that must plug into the platform, according to an IDC report. When deploying SIEM, it can be daunting for network staff to choose which data sources to integrate, set up the correlation rules that define what will be classified as a security event, and determine the alert thresholds for specific data and activities. It's equally challenging to manage the information and alerts a SIEM platform issues. If the system is tuned too sensitively, the result is a flood of false positives as it triggers alarms about events that aren't actually threats. This is a time sink for network techs and can lead to staff fatigue and frustration. In contrast, if the calibration is too liberal, organizations run the risk of overlooking something that could be vital. Network staff must also coordinate with other areas of IT and the company. For example, what if data safekeeping and compliance regulations change? Does this change the SIEM rule sets? What if the IT applications group rolls out new systems that must be attached to the SIEM? Can the legal department or auditors tell you how long to store and retain data for eDiscovery or for disaster backup and recovery? And which data noise can you discard as waste?
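
To make the configuration challenge concrete, here is a minimal, hypothetical sketch of what a correlation rule with an alert threshold boils down to. It is not tied to any particular SIEM product, and the event fields and numbers are illustrative assumptions:

from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical correlation rule: alert when one source IP produces too many
# failed logins inside a sliding time window.
FAILED_LOGIN_THRESHOLD = 10      # the alert threshold teams must tune
WINDOW = timedelta(minutes=5)    # the correlation window

recent_failures = defaultdict(deque)   # source_ip -> timestamps of failed logins

def process_event(event):
    """Return an alert string if this event pushes its source over the threshold."""
    if event.get("type") != "auth_failure":
        return None                    # this rule only correlates failed logins
    ip = event["source_ip"]
    now = datetime.fromisoformat(event["timestamp"])
    window = recent_failures[ip]
    window.append(now)
    while window and now - window[0] > WINDOW:
        window.popleft()               # drop events outside the correlation window
    if len(window) >= FAILED_LOGIN_THRESHOLD:
        return f"ALERT: {len(window)} failed logins from {ip} in the last {WINDOW}"
    return None

Set the threshold or window too low and analysts drown in false positives; set them too high and real brute-force activity slips through -- exactly the calibration trade-off described above.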


AI Data Centers: A Popular Term That’s Hard to Define

The tricky thing about trying to define AI data centers based on characteristics like those described above is that none of those features is unique to AI data centers. For example, hyperscale data centers – meaning very large facilities capable of accommodating more than a hundred thousand servers in some cases – existed before modern AI debuted. AI has made large-scale data centers more important because AI workloads require vast infrastructures, but it’s not as if no one was building large data centers before AI rose to prominence. Likewise, it has long been possible to deploy GPU-equipped servers in data centers. ... Likewise, advanced cooling systems and innovative approaches to data center power management are not unique to the age of generative AI. They, too, predated AI data centers. ... Arguably, an AI data center is ultimately defined by what it does (hosting AI workloads) more than by how it does it. So, before getting hung up on the idea that AI requires investment in a new generation of data centers, it’s perhaps healthier to think about how to leverage the data centers already in existence to support AI workloads. That perspective will help the industry avoid the risk of overinvesting in new data centers designed specifically for AI – and as a bonus, it may save money by allowing businesses to repurpose the data centers they already own to meet their AI needs as well.


Password Managers Vulnerable to Data Theft via Clickjacking

Tóth showed how an attacker can use DOM-based extension clickjacking and the autofill functionality of password managers to exfiltrate sensitive data stored by these applications, including personal data, usernames and passwords, passkeys, and payment card information. The attacks demonstrated by the researcher require 0-5 clicks from the victim, with a majority requiring only one click on a harmless-looking element on the page. The single-click attacks often involved exploitation of XSS or other vulnerabilities. DOM, or Document Object Model, is an object tree created by the browser when it loads an HTML or XML web page. ... Tóth’s attack involves a malicious script that manipulates user interface elements injected by browser extensions into the DOM. “The principle is that a browser extension injects elements into the DOM, which an attacker can then make invisible using JavaScript,” he explained. According to the researcher, some of the vendors have patched the vulnerabilities, but fixes have not been released for Bitwarden, 1Password, iCloud Passwords, Enpass, LastPass, and LogMeOnce. SecurityWeek has reached out to these companies for comment. Bitwarden said a fix for the vulnerability is being rolled out this week with version 2025.8.0. LogMeOnce said it’s aware of the findings and its team is actively working on resolving the issue through a security update.


Iskraemeco India CEO: ERP, AI, and the future of utility leadership

We see a clear convergence ahead, where ERP systems like Infor’s will increasingly integrate with edge AI, embedded IoT, and low-code automation to create intelligent, responsive operations. This is especially relevant in utility scenarios where time-sensitive data must drive immediate action. For instance, our smart kits – equipped with sensor technology – are being designed to detect outages in real time and pinpoint exact failure points, such as which pole needs service during a natural disaster. This type of capability, powered by embedded IoT and edge computing, enables decisions to be made closer to the source, reducing downtime and response lag.  ... One of the most important lessons we've learned is that success in complex ERP deployments is less about customisation and more about alignment, across leadership, teams, and technology. In our case, resisting the urge to modify the system and instead adopting Infor’s best-practice frameworks was key. It allowed us to stay focused, move faster, and ensure long-term stability across all modules. In a multi-stakeholder environment – where regulatory bodies, internal departments, and technology partners are all involved – clarity of direction from leadership made all the difference. When the expectation is clear that we align to the system, and not the other way around, it simplifies everything from compliance to team onboarding.


Experts Concerned by Signs of AI Bubble

"There's a huge boom in AI — some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears," Kai Wu, founder and chief investment officer of Sparkline Capital, told the Wall Street Journal last year. There are even doubters inside the industry. In July, recently ousted CEO of AI company Stability AI Emad Mostaque told banking analysts that "I think this will be the biggest bubble of all time." "I call it the 'dot AI’ bubble, and it hasn’t even started yet," he added at the time. Just last week, Jeffrey Gundlach, billionaire CEO of DoubleLine Capital, also compared the AI craze to the dot com bubble. "This feels a lot like 1999," he said during an X Spaces broadcast last week, as quoted by Business Insider. "My impression is that investors are presently enjoying the double-top of the most extreme speculative bubble in US financial history," Hussman Investment Trust president John Hussman wrote in a research note. In short, with so many people ringing the alarm bells, there could well be cause for concern. And the consequences of an AI bubble bursting could be devastating. ... While Nvidia would survive such a debacle, the "ones that are likely to bear the brunt of the correction are the providers of generative AI services who are raising money on the promise of selling their services for $20/user/month," he argued.


OpenCUA’s open source computer-use agents rival proprietary models from OpenAI and Anthropic

Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private. “As the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,” the researchers state in their paper. ... The tool streamlines data collection by running in the background on an annotator’s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements.  ... The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed “inner monologue” for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.
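
The three-level reasoning structure described here can be pictured as a simple record attached to each step of a demonstration trajectory. The field names and example values below are illustrative assumptions, not the authors' actual schema:

from dataclasses import dataclass

@dataclass
class ReasoningStep:
    """One CoT-augmented step in a computer-use trajectory (illustrative only)."""
    observation: str   # high-level description of the current screen
    thought: str       # reflective reasoning: analyze the situation, plan the next move
    action: str        # concise, executable action grounded in the UI

step = ReasoningStep(
    observation="A file-save dialog is open with the filename field focused.",
    thought="The task asks for 'report.pdf', so I should type that name and then click Save.",
    action="type(text='report.pdf')",
)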


How to remember everything

MyMind is a clutter-free bookmarking and knowledge-capture app without folders or manual content organization. There are no templates, manual customizations, or collaboration tools. Instead, MyMind recognizes and formats the content type elegantly. For example, songs, movies, books, and recipes are displayed differently based on MyMind’s detection, regardless of the source, as are pictures and videos. MyMind uses AI to auto-tag everything and allows custom tags. Every word, including those in pictures, is indexed. You can take pictures of information, upload them to MyMind, and find them later by searching a word or two found in the picture. Copying a sentence or paragraph from an article will display the quote with a source link. Every data chunk is captured in a “card.” ... Alongside AI-enabled lifelogging tools like MyMind, we’re also entering an era of lifelogging hardware devices. One promising direction comes from a startup called Brilliant Labs. Its new $299 Halo glasses, available for pre-order and shipping in November, are lightweight AI glasses. The glasses have a long list of features — bone conduction sound, a camera, light weight, etc. — but the lifelogging enabler is an “agentic memory” system called Narrative. It captures information automatically from the camera and microphones and places it into a personal knowledge base.


From APIs to Digital Twins: Warehouse Integration Strategies for Smarter Supply Chains

Digital twins create virtual replicas of warehouses and supply chains for monitoring and testing. A digital twin ingests live data from IoT sensors, machines, and transportation feeds to simulate how changes affect outcomes. For instance, GE’s “Digital Wind Farm” project feeds sensor data from each turbine into a cloud model, suggesting performance tweaks that boost energy output by ~20% (worth ~$100M more revenue per turbine). In warehousing, digital twins can model workflows (layout changes, staffing shifts, equipment usage) to identify bottlenecks or test improvements before physical changes. Paired with AI, these twins become predictive and prescriptive: companies can run thousands of what-if scenarios (like a port strike or demand surge) and adjust plans accordingly. ... Today’s warehouses are not just storage sheds; they are smart, interconnected nodes in the supply chain. Leveraging IIoT sensors, cloud APIs, AI analytics, robotics, and digital twins transforms logistics into a competitive advantage. Integrated systems reduce manual handoffs and errors: for example, automated picking and instant carrier booking can shorten fulfillment cycles from days to hours. Industry data bear this out: deploying these technologies can improve on-time delivery by ~20% and significantly lower operating costs.
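
As a toy illustration of the what-if idea, a deliberately simplified warehouse model can be perturbed thousands of times to see how a demand surge affects fulfillment. The model, staffing numbers, and surge range below are invented for illustration and bear no relation to any real facility:

import random

def unfulfilled_orders(orders, pickers, picks_per_picker=300):
    """Return how many orders are left unfilled after one simulated day."""
    capacity = pickers * picks_per_picker
    return max(0, orders - capacity)

# What-if scenario: demand surges of up to 40% against today's staffing of 25 pickers.
baseline_orders = 7000
backlogs = []
for _ in range(10_000):
    surge = random.uniform(1.0, 1.4)
    backlogs.append(unfulfilled_orders(int(baseline_orders * surge), pickers=25))

print(f"Average backlog across scenarios: {sum(backlogs) / len(backlogs):.0f} orders")
print(f"Share of scenarios with any backlog: {sum(b > 0 for b in backlogs) / len(backlogs):.0%}")

A production digital twin replaces this toy model with live sensor feeds and far richer constraints, but the decision loop -- simulate, compare, adjust the plan -- is the same.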


Enterprise Software Spending Surges Despite AI ROI Shortfalls

AI capabilities increasingly drive software purchasing decisions. However, many organizations struggle with the gap between AI promise and practical ROI delivery. The disconnect stems from fundamental challenges in data accessibility and contextual understanding. Current AI implementations face significant obstacles in accessing the full spectrum of contextual data required for complex decision-making. "In complex use cases, where the exponential benefits of AI reside, AI still feels forced and contrived when it doesn't have the same amount and depth of contextual data required to read a situation," Kirkpatrick explained. Effective AI implementation requires comprehensive data infrastructure investments. Organizations must ensure AI models can access approved data sources while maintaining proper guardrails. Many IT departments are still working to achieve this balance. The challenge intensifies in environments where AI needs to integrate across multiple platforms and data sources. Well-trained humans often outperform AI on complex tasks because their experience allows them to read multiple factors and adjust contextually. "For AI to mimic that experience, it requires a wide range of data that can address factors across a wide range of dimensions," Kirkpatrick said. "That requires significant investment in data to ensure the AI has the information it needs at the right time, with the proper context, to function seamlessly, effectively, and efficiently."

Daily Tech Digest - August 23, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


Enterprise passwords becoming even easier to steal and abuse

Attackers actively target user credentials because they offer the most direct route or foothold into a targeted organization’s network. Once inside, attackers can move laterally across systems, searching for other user accounts to compromise, or they attempt to escalate their privileges and gain administrative control. This hunt for credentials extends beyond user accounts to include code repositories, where developers may have hard-coded access keys and other secrets into application source code. Attacks using valid credentials were successful 98% of the time, according to Picus Security. ... “CISOs and security teams should focus on enforcing strong, unique passwords, using MFA everywhere, managing privileged accounts rigorously and testing identity controls regularly,” Curran says. “Combined with well-tuned DLP and continuous monitoring that can detect abnormal patterns quickly, these measures can help limit the impact of stolen or cracked credentials.” Picus Security’s latest findings reveal a concerning gap between the perceived protection of security tools and their actual performance. An overall protection effectiveness score of 62% contrasts with a shockingly low 3% prevention rate for data exfiltration. “Failures in detection rule configuration, logging gaps and system integration continue to undermine visibility across security operations,” according to Picus Security.


Architecting the next decade: Enterprise architecture as a strategic force

In an age of escalating cyber threats and expanding digital footprints, security can no longer be layered on; it must be architected in from the start. With the rise of AI, IoT and even quantum computing on the horizon, the threat landscape is more dynamic than ever. Security-embedded architectures prioritize identity-first access control, continuous monitoring and zero-trust principles as baseline capabilities. ... Sustainability is no longer a side initiative; it’s becoming a first principle of enterprise architecture. As organizations face pressure from regulators, investors and customers to lower their carbon footprint, digital sustainability is gaining traction as a measurable design objective. From energy-efficient data centers to cloud optimization strategies and greener software development practices, architects are now responsible for minimizing the environmental impact of IT systems. The Green Software Foundation has emerged as a key ecosystem partner, offering measurement standards like software carbon intensity (SCI) and tooling for emissions-aware development pipelines. ... Technology leaders must now foster a culture of innovation, build interdisciplinary partnerships and enable experimentation while ensuring alignment with long-term architectural principles. They must guide the enterprise through both transformation and stability, navigating short-term pressures and long-term horizons simultaneously.
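
For reference, the Green Software Foundation's published SCI specification defines the metric roughly as:

SCI = ((E × I) + M) per R

where E is the energy consumed by the software, I is the carbon intensity of that energy, M is the embodied emissions of the hardware, and R is a functional unit such as a user, API call, or transaction. With illustrative numbers only: a service consuming 50 kWh a day in a region emitting 400 gCO2e/kWh, with 2,000 gCO2e of amortized embodied emissions, serving 100,000 requests, scores ((50 × 400) + 2,000) / 100,000 = 0.22 gCO2e per request.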


Capitalizing on Digital: Four Strategic Imperatives for Banks and Credit Unions

Modern architectures dissolve the boundary between core and digital. The digital banking solution is no longer a bolt-on to the core; the core and digital come together to form the accountholder experience. That user experience is delivered through the digital channel, but when done correctly, it’s enabled by the modern core. Among other things, the core transformation requires robust use of shared APIs, consistent data structures, and unified development teams. Leading financial institutions are coming to realize that core evaluations now must include an evaluation of the core's capability to enable the digital experience. Criteria like availability, reliability, real-time performance, speed, and security are now emerging as foundational requirements of a core that enables the digital experience. "If your core can’t keep up with your digital, you’re stuck playing catch-up forever," said Jack Henry’s Paul Wiggins, Director of Sales, Digital Engineers. ... Many institutions still operate with digital siloed in one department, while marketing, product, and operations pursue separate agendas. This leads to mismatched priorities — products that aren’t promoted effectively, campaigns that promise features operations can’t support, and technical fixes that don’t address the root cause of customer and member pain points. ... Small-business services are a case in point. Jack Henry’s Strategy Benchmark study found that 80% of CEOs plan to expand these services over the next two years.


Bentley Systems CIO Talks Leadership Strategy and AI Adoption

The thing that’s really important for a CIO to be thinking about is that we are a microcosm for how all of the business functions are trying to execute the tactics against the strategy. What we can do across the portfolio is represent the strategy in real terms back to the business. We can say: These are all of the different places where we're thinking about investing. Does that match with the strategy we thought we were setting for ourselves? And where is there a delta and a difference? ... When I got my first CIO role, there was all of this conversation about business process. That was the part that I had to learn and figure out how to map into these broader, strategic conversations. I had my first internal IT role at Deutsche Bank, where we really talked about product model a lot -- thinking about our internal IT deliverables as products. When I moved to Lenovo, we had very rich business process and transformation conversations because we were taking the whole business through such a foundational change. I was able to put those two things together. It was a marriage of several things: running a product organization; marrying that to the classic IT way of thinking about business process; and then determining how that becomes representative to the business strategy.


What Is Active Metadata and Why Does It Matter?

Active metadata addresses the shortcomings of passive approaches by automatically updating the metadata whenever an important aspect of the information changes. Defining active metadata and understanding why it matters begins by looking at the shift in organizations’ data strategies from a focus on data acquisition to data consumption. The goal of active metadata is to promote the discoverability of information resources as they are acquired, adapted, and applied over time. ... From a data consumer’s perspective, active metadata adds depth and breadth to their perception of the data that fuels their decision-making. By highlighting connections between data elements that would otherwise be hidden, active metadata promotes logical reasoning about data assets. This is especially so when working on complex problems that involve a large number of disconnected business and technical entities. The active metadata analytics workflow orchestrates metadata management across platforms to enhance application integration, resource management, and quality monitoring. It provides a single, comprehensive snapshot of the current status of all data assets involved in business decision-making. The technology augments metadata with information gleaned from business processes and information systems.
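
One way to picture the contrast with passive approaches is a catalog entry that refreshes itself whenever the asset is touched. The structure below is a hypothetical simplification, not any particular vendor's model:

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ActiveMetadataEntry:
    """Catalog entry that updates itself as the asset is used (simplified sketch)."""
    asset_name: str
    owner: str
    schema_version: int = 1
    access_count: int = 0
    last_accessed: Optional[datetime] = None
    usage_log: list = field(default_factory=list)

    def record_access(self, consumer):
        # Passive metadata waits for someone to update it by hand; active metadata
        # captures usage, freshness, and consumer lineage at the point of use.
        self.access_count += 1
        self.last_accessed = datetime.now(timezone.utc)
        self.usage_log.append({"consumer": consumer, "at": self.last_accessed})

orders = ActiveMetadataEntry(asset_name="sales.orders", owner="data-platform")
orders.record_access(consumer="quarterly_revenue_dashboard")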


Godrej Enterprises CHRO on redefining digital readiness as culture, not tech

“Digital readiness at Godrej Enterprises Group is about empowering every employee to thrive in an ever-evolving landscape,” Kaur said. “It’s not just about technology adoption. It’s about building a workforce that is agile, continuously learning, and empowered to innovate.” This reframing reflects a broader trend across Indian industry, where digital transformation is no longer confined to IT departments but runs through every layer of an organisation. For Godrej Enterprises Group, this means designing a workplace where intrapreneurship is rewarded, innovation is constant, and employees are trained to think beyond immediate functions. ... “We’ve moved away from one-off training sessions to creating a dynamic ecosystem where learning is accessible, relevant, and continuous,” she said. “Learning is no longer a checkbox — it’s a shared value that energises our people every day.” This shift is underpinned by leadership development programmes and innovation platforms, ensuring that employees at every level are encouraged to experiment and share knowledge.  ... “We see digital skilling as a core business priority, not just an HR or L&D initiative,” she said. “By making digital skilling a shared responsibility, we foster a culture where learning is continuous, progress is visible, and success is celebrated across the organisation.”


AI is creeping into the Linux kernel - and official policy is needed ASAP

However, before you get too excited, he warned: "This is a great example of what LLMs are doing right now. You give it a small, well-defined task, and it goes and does it. And you notice that this patch isn't, 'Hey, LLM, go write me a driver for my new hardware.' Instead, it's very specific -- convert this specific hash to use our standard API." Levin said another AI win is that "for those of us who are not native English speakers, it also helps with writing a good commit message. It is a common issue in the kernel world where sometimes writing the commit message can be more difficult than actually writing the code change, and it definitely helps there with language barriers." ... Looking ahead, Levin suggested LLMs could be trained to become good Linux maintainer helpers: "We can teach AI about kernel-specific patterns. We show examples from our codebase of how things are done. It also means that by grounding it into our kernel code base, we can make AI explain every decision, and we can trace it to historical examples." In addition, he said the LLMs can be connected directly to the Linux kernel Git tree, so "AI can go ahead and try and learn things about the Git repo all on its own." ... This AI-enabled program automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. The tool examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.
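
A minimal sketch of the kind of commit-analysis pipeline described here might look like the following. Only the git commands are real; the prompt wording and the ask_llm callable are placeholders, since the article does not describe the tool's internals:

import subprocess

def commit_context(sha):
    """Collect the message and diff for one kernel commit using plain git."""
    message = subprocess.run(
        ["git", "log", "-1", "--format=%B", sha],
        capture_output=True, text=True, check=True,
    ).stdout
    diff = subprocess.run(
        ["git", "show", "--stat", "--patch", sha],
        capture_output=True, text=True, check=True,
    ).stdout
    return f"COMMIT MESSAGE:\n{message}\nDIFF:\n{diff[:8000]}"   # truncate for the prompt

def should_backport(sha, ask_llm):
    """ask_llm is a placeholder for whatever model endpoint such a tool would use."""
    prompt = (
        "You review Linux kernel commits for stable backporting. Answer YES if the "
        "commit below is a targeted bug fix suitable for stable trees, NO if it is "
        "a new feature or refactor, and explain your reasoning.\n\n" + commit_context(sha)
    )
    return ask_llm(prompt).strip().upper().startswith("YES")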


Applications and Architecture – When It’s Not Broken, Should You Try to Fix It?

No matter how reliable your application components are, they will need to be maintained, upgraded or replaced at some point. As elements in your application evolve, some will reach end-of-life status – for example, Redis 7.2 reaches end of life for security updates in February 2026. Before that point, it’s necessary to assess the available options. For businesses in some sectors like financial services, running out-of-date and unsupported software is a potential failure against regulations on security and resilience. For example, the Payment Card Industry Data Security Standard version 4.0 requires teams to check every year that all their software and hardware is supported; in the case of end-of-life software, teams must also provide a full migration plan that will be completed within twelve months. ... For developers and software architects, understanding the role that any component plays in the overall application makes it easier to plan ahead. Even the most reliable and consistent component may need to change given outside circumstances. In the Discworld series, golems are so reliable that they become the standard for currency; at the same time, there are so many of them that any problem could affect the whole economy. When it comes to data caching, Redis has been a reliable companion for many developers.


From cloud migration to cloud optimization

The report, based on insights from more than 2,000 IT leaders, reveals that a staggering 94% of global IT leaders struggle with cloud cost optimization. Many enterprises underestimate the complexities of managing public cloud resources and the inadvertent overspending that occurs from mismanagement, overprovisioning, or a lack of visibility into resource usage. This inefficiency goes beyond just missteps in cloud adoption. It also highlights how difficult it is to align IT cost optimization with broader business objectives. ... This growing focus sheds light on the rising importance of finops (financial operations), a practice aimed at bringing greater financial accountability to cloud spending. Adding to this complexity is the increasing adoption of artificial intelligence and automation tools. These technologies drive innovation, but they come with significant associated costs. ... The argument for greater control is not new, but it has gained renewed relevance when paired with cost optimization strategies. ... With 41% of respondents’ IT budgets still being directed to scaling cloud capabilities, it’s clear that the public cloud will remain a cornerstone of enterprise IT in the foreseeable future. Cloud services such as AI-powered automation remain integral to transformative business strategies, and public cloud infrastructure is still the preferred environment for dynamic, highly scalable workloads. Enterprises will need to make cloud deployments truly cost-effective.


The Missing Layer in AI Infrastructure: Aggregating Agentic Traffic

Software architects and engineering leaders building AI-native platforms are starting to notice familiar warning signs: sudden cost spikes on AI API bills, bots with overbroad permissions tapping into sensitive data, and a disconcerting lack of visibility or control over what these AI agents are doing. It’s a scenario reminiscent of the early days of microservices – before we had gateways and meshes to restore order – only now the "microservices" are semi-autonomous AI routines. Gartner has begun shining a spotlight on this emerging gap. ... Every major shift in software architecture eventually demands a mediation layer to restore control. When web APIs took off, API gateways became essential for managing authentication/authorization, rate limits, and policies. With microservices, service meshes emerged to govern internal traffic. Each time, the need only became clear once the pain of scale surfaced. Agentic AI is on the same path. Teams are wiring up bots and assistants that call APIs independently - great for demos ... So, what exactly is an AI Gateway? At its core, it’s a middleware component – either a proxy, service, or library – through which all AI agent requests to external services are channeled. Rather than letting each agent independently hit whatever API it wants, you route those calls via the gateway, which can then enforce policies and provide central management.
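
In code, the core idea is small: every agent request funnels through one choke point that can check policy, meter usage, and log before forwarding. The sketch below is a deliberately simplified, framework-free illustration rather than a production gateway, and the policy values are arbitrary:

import time
import requests   # assumes the third-party 'requests' package is installed

class AIGateway:
    """Single choke point for agent traffic: allow-listing, rate limiting, logging."""

    def __init__(self, allowed_hosts, max_calls_per_minute=60):
        self.allowed_hosts = allowed_hosts
        self.max_calls_per_minute = max_calls_per_minute
        self.call_log = []   # (timestamp, agent_id, url)

    def forward(self, agent_id, url, payload):
        host = url.split("/")[2]
        if host not in self.allowed_hosts:
            raise PermissionError(f"{agent_id} is not allowed to call {host}")
        now = time.time()
        recent = [t for t, a, _ in self.call_log if a == agent_id and now - t < 60]
        if len(recent) >= self.max_calls_per_minute:
            raise RuntimeError(f"{agent_id} exceeded {self.max_calls_per_minute} calls/minute")
        self.call_log.append((now, agent_id, url))
        response = requests.post(url, json=payload, timeout=30)   # the actual upstream call
        response.raise_for_status()
        return response.json()

A real AI gateway layers on authentication, token and cost accounting, semantic caching, and audit trails, but the architectural point is the same: agents never talk to external APIs directly.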



Daily Tech Digest - August 21, 2025


Quote for the day:

"The master has failed more times than the beginner has even tried." -- Stephen McCranie


Ghost Assets Drain 25% of IT Budgets as ITAM Confidence Gap Widens

The survey results reveal fundamental breakdowns in communication, trust, and operational alignment that threaten both current operations and future digital transformation initiatives. ... The survey's most alarming finding centers on ghost assets. These are IT resources that continue consuming budget and creating risk while providing zero business value. The phantom resources manifest across the entire technology stack, from forgotten cloud instances to untracked SaaS subscriptions. ... The tool sprawl paradox is striking. Sixty-five percent of IT managers use six or more ITAM tools yet express confidence in their setup. Non-IT roles use fewer tools but report significantly lower integration confidence. This suggests IT teams have adapted to complexity through process workarounds rather than achieving true operational efficiency. ... "Over the next two to three years, I see this confidence gap continuing to widen," Collins said. "This is primarily fueled by the rapid acceleration of hybrid work models, mass migration to the cloud, and the burgeoning adoption of artificial intelligence, creating a perfect storm of complexity for IT asset management teams." Collins noted that the distributed workforce has shattered the traditional, centralized view of IT assets. Cloud migration introduces shadow IT, ghost assets, and uncontrolled sprawl that bypass traditional procurement channels.


Documents: The architect’s programming language

The biggest bottlenecks in the software lifecycle have nothing to do with code. They’re people problems: communication, persuasion, decision-making. So in order to make an impact, architects have to consistently make those things happen, sprint after sprint, quarter after quarter. How do you reliably get the right people in the right place, at the right time, talking about the right things? Is there a transfer protocol or infrastructure-as-code tool that works on human beings? ... A lot of programmers don’t feel confident in their writing skills, though. It’s hard to switch from something you’re experienced at, where quality speaks for itself (programming) to something you’re unfamiliar with, where quality depends on the reader’s judgment (writing). So what follows is a crash course: just enough information to help you confidently write good (even great) documents, no matter who you are. You don’t have to have an English degree, or know how to spell “idempotent,” or even write in your native language. You just have to learn a few techniques. ... The main thing you want to avoid is a giant wall of text. Often the people whose attention your document needs most are the people with the most demands on their time. If you send them a four-page essay, there’s a good chance they’ll never have the time to get through it. 


CIOs at the Crossroads of Innovation and Trust

Consulting firm McKinsey's Technology Trends Outlook 2025 paints a vivid picture: The CIO is no longer a technologist but one who writes a narrative where technology and strategy merge. Four forces together - artificial intelligence at scale, agentic AI, cloud-edge synergy and digital trust - are a perfect segue for CIOs to navigate the technology forces of the future and turn disruption into opportunities. ... As the attack surface continues to expand due to advances in AI, connected devices and cloud tech - and because the regulatory environment is still in a constant flux - achieving enterprise-level cyber resilience is critical. ... McKinsey's data indicates - and it's no revelation - a global shortage of AI, cloud and security experts. But leading companies are overcoming this bottleneck by upskilling their workers. AI copilots train employees, while digital agents handle repetitive tasks. The boundary between human and machine is blurring, and the CIO is the alchemist, creating hybrid teams that drive transformation. If there's a single plot twist for 2025, it's this: Technology innovation is assessed not by experimentation but by execution. Tech leaders have shifted from chasing shiny objects to demanding business outcomes, from adopting new platforms to aligning every digital investment with growth, efficiency and risk reduction.


Bigger And Faster Or Better And Greener? The EU Needs To Define Its Priorities For AI

Since Europe is currently not clear on its priorities for AI development, US-based Big Tech companies can use their economic and discursive power to push their own ambitions onto Europe. Through publications directly aimed at EU policy-makers, companies promote their services as if they are perfectly aligned with European values. By promising the EU can have it all — bigger, faster, greener and better AI — tech companies exploit this flexible discursive space to spuriously position themselves as “supporters” of the EU’s AI narrative. Two examples may illustrate this: OpenAI and Google. ... Big Tech’s promises to develop AI infrastructure faster while optimizing sustainability, enhancing democracy, and increasing competitiveness seem too good to be true — which in fact they are. Not surprisingly, their claims are remarkably low on details and far removed from the reality of these companies’ immense carbon emissions. Bigger and faster AI is simply incompatible with greener and better AI. And yet, one of the main reasons why Big Tech companies’ claims sound agreeable is that the EU’s AI Continent Action Plan fails to define clear conditions and set priorities in how to achieve better and greener AI. So what kind of changes does the EU AI-CAP need? First, it needs to set clear goalposts on what constitutes a democratic and responsible use of AI, even if this happens at the expense of economic competitiveness. 


Myth Or Reality: Will AI Replace Computer Programmers?

The truth is that the role of the programmer, in line with just about every other professional role, will change. Routine, low-level tasks such as customizing boilerplate code and checking for coding errors will increasingly be done by machines. But that doesn’t mean basic coding skills won’t still be important. Even if humans are using AI to create code, it’s critical that we can understand it and step in when it makes mistakes or does something dangerous. This shows that humans with coding skills will still be needed to meet the requirement of having a “human-in-the-loop”. This is essential for safe and ethical AI, even if its use is restricted to very basic tasks. This means entry-level coding jobs don’t vanish, but instead transition into roles where the ability to automate routine work and augment our skills with AI becomes the bigger factor in the success or failure of a newbie programmer. Alongside this, entirely new development roles will also emerge, including AI project management, specialists in connecting AI and legacy infrastructure, prompt engineers and model trainers. We’re also seeing the emergence of entirely new methods of developing software, using generative AI prompts alone. Recently, this has been named "vibe coding" because of the perceived lack of stress and technical complexity in relation to traditional coding.


FinOps as Code – Unlocking Cloud Cost Optimization

FinOps as Code (FaC) is the practice of applying software engineering principles, particularly those from Infrastructure as Code (IaC), to cloud financial management. It treats financial operations, such as cost management and resource allocation, as code-driven processes that can be automated, version-controlled, and collaborated on across teams in an organization. FinOps as Code blends financial operations with cloud-native practices to optimize and manage cloud spending programmatically using code. It enables FinOps principles and guidelines to be coded directly into CI/CD pipelines. ... When you bring FinOps into your organization, you know where and how you spend your money. FinOps drives a cultural transformation in which each team member is aware of how their cloud usage affects the organization's final costs. Cloud spend is no longer merely an IT issue, and it needs to be managed accordingly. ... FinOps as Code (FaC) is an emerging trend enabling the infusion of FinOps principles into the software development lifecycle using Infrastructure as Code (IaC) and automation. It helps embed cost awareness directly into the development process, encourages collaboration between engineering and finance teams, and improves cloud resource utilization. Additionally, it empowers your teams to take ownership of their cloud usage in the organization.
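
One concrete way to picture FinOps rules living in a CI/CD pipeline is a budget gate that fails the build when forecasted spend drifts past an agreed threshold. The team names, figures, and forecast file below are illustrative assumptions, not part of any standard:

import json
import sys

BUDGETS = {"team-payments": 12_000, "team-search": 8_000}   # monthly budgets in USD
TOLERANCE = 1.10                                            # allow a 10% overrun before failing

def check_budgets(forecast_file="cost_forecast.json"):
    """Exit non-zero (failing the pipeline) if any team's forecast breaches its budget."""
    with open(forecast_file) as f:
        forecast = json.load(f)   # e.g. {"team-payments": 13100.0, "team-search": 7400.0}
    failures = 0
    for team, budget in BUDGETS.items():
        projected = forecast.get(team, 0.0)
        if projected > budget * TOLERANCE:
            print(f"FAIL {team}: projected ${projected:,.0f} against a budget of ${budget:,.0f}")
            failures += 1
        else:
            print(f"OK   {team}: projected ${projected:,.0f} is within budget")
    return failures

if __name__ == "__main__":
    sys.exit(check_budgets())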


6 IT management practices certain to kill IT productivity

Eliminating multitasking is too much to shoot for, because there are, inevitably, more bits and pieces of work than there are staff to work on them. Also, the political pressure to squeeze something in usually overrules the logic of multitasking less. So instead of trying to stamp it out, attack the problem at the demand side instead of the supply side by enforcing a “Nothing-Is-Free” rule. ... Encourage a “culture of process” throughout your organization. Yes, this is just the headline, and there’s a whole lot of thought and work associated with making it real. Not everything can be reduced to an e-zine article. Sorry. ... If you hold people accountable when something goes wrong, they’ll do their best to conceal the problem from you. And the longer nobody deals with a problem, the worse it gets. ... Whenever something goes wrong, first fix the immediate problem — aka “stop the bleeding.” Then, figure out which systems and processes failed to prevent the problem and fix them so the organization is better prepared next time. And if it turns out the problem really was that someone messed up, figure out if they need better training and coaching, if they just got unlucky, if they took a calculated risk, or if they really are a problem employee you need to punish — what “holding people accountable” means in practice.


Resilience and Reinvention: How Economic Shocks Are Redefining Software Quality and DevOps

Reducing investments in QA might provide immediate financial relief, but it introduces longer-term risks. Releasing software with undetected bugs and security vulnerabilities can quickly erode customer trust and substantially increase remediation costs. History demonstrates that neglected QA efforts during financial downturns inevitably lead to higher expenses and diminished brand reputations due to subpar software releases. ... Automation plays an essential role in filling gaps caused by skills shortages. Organizations worldwide face a substantial IT skills shortage that will cost them $5.5 trillion by 2026, according to an IDC survey of North American IT leaders. ... The complexity of the modern software ecosystem magnifies the impact of economic disruptions. Delays or budget constraints in one vendor can create spillover, causing delays and complications across entire project pipelines. These interconnected dependencies magnify the importance of better operational visibility. Visibility into testing and software quality processes helps teams anticipate these ripple effects. ... Effective resilience strategies focus less on budget increases and more on strategic investment in capabilities that deliver tangible efficiency and reliability benefits. Technologies that support centralized testing, automation, and integrated quality management become critical investments rather than optional expenditures.


Current Debate: Will the Data Center of the Future Be AC or DC?

“DC power has been around in some data centers for about 20 years,” explains Peter Panfil, vice president of global power at Vertiv. “400V and 800V have been utilized in UPS for ages, but what is beginning to emerge to cope with the dynamic load shifts in AI are [new] applications of DC.” ... Several technical hurdles must be overcome before DC achieves broad adoption in the data center. The most obvious challenge is component redesign. Nearly every component – from transformers to breakers – must be re-engineered for DC operation. That places a major burden on transformer, PDU, substation, UPS, converter, regulator, and other electrical equipment suppliers. High-voltage DC also raises safety challenges. Arc suppression and fault isolation are more complex. Internal models are being devised to address this problem with solid-state circuit breakers and hybrid protection schemes. In addition, there is no universal standard for DC distribution in data centers, which complicates interoperability and certification. ... On the sustainability front, DC has a clear edge. DC power results in lower conversion losses, which equate to less wasted energy. Further, DC is more compatible with solar PV and battery storage, reducing long-term Opex and carbon costs.


Weak Passwords and Compromised Accounts: Key Findings from the Blue Report 2025

In the Blue Report 2025, Picus Labs found that password cracking attempts succeeded in 46% of tested environments, nearly doubling the success rate from last year. This sharp increase highlights a fundamental weakness in how organizations are managing – or mismanaging – their password policies. Weak passwords and outdated hashing algorithms continue to leave critical systems vulnerable to attackers using brute-force or rainbow table attacks to crack passwords and gain unauthorized access. Given that password cracking is one of the oldest and most reliably effective attack methods, this finding points to a serious issue: in their race to combat the latest, most sophisticated new breed of threats, many organizations are failing to enforce strong basic password hygiene policies while failing to adopt and integrate modern authentication practices into their defenses. ... The threat of credential abuse is both pervasive and dangerous, yet as the Blue Report 2025 highlights, organizations are still underprepared for this form of attack. And once attackers obtain valid credentials, they can easily move laterally, escalate privileges, and compromise critical systems. Infostealers and ransomware groups frequently rely on stolen credentials to spread across networks, burrowing deeper and deeper, often without triggering detection. 
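
On the point about outdated hashing algorithms: the gap between a fast, unsalted hash and a salted, memory-hard one is much of what separates a crackable credential store from a resilient one. A minimal sketch using only the Python standard library's scrypt, one of several acceptable choices alongside bcrypt and Argon2 (the parameters shown are common defaults, not a universal recommendation):

import hashlib
import hmac
import os

def hash_password(password):
    """Hash a password with a per-user random salt and a memory-hard KDF."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest   # store both; never store plaintext or a fast unsalted hash

def verify_password(password, salt, stored_digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored_digest)   # constant-time comparison

Per-user salts defeat rainbow tables, and the deliberately expensive derivation slows brute-force cracking by orders of magnitude compared with the legacy algorithms the report calls out.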

Daily Tech Digest - June 17, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley



Understanding how data fabric enhances data security and governance

“The biggest challenge is fragmentation; most enterprises operate across multiple cloud environments, each with its own security model, making unified governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering Services at Sutherland Global, told InfoWorld. ... Shadow IT is also a persistent threat and challenge. According to Sengupta, some enterprises discover nearly 40% of their data exists outside governed environments. Proactively discovering and onboarding those data sources has become non-negotiable. ... A data fabric deepens organizations’ understanding and control of their data and consumption patterns. “With this deeper understanding, organizations can easily detect sensitive data and workloads in potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert commented. “With deeper control, organizations can then apply the necessary data governance and security measures in near real time to remain compliant.” ... Data security and governance inside a data fabric shouldn’t just be about controlling access to data; it should also come with some form of data validation. The clichéd saying “garbage-in, garbage-out” is all too true when it comes to data. After all, what’s the point of ensuring security and governance on data that isn’t valid in the first place?
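
To make the garbage-in, garbage-out point concrete, governance in a fabric can include validation rules evaluated as data is onboarded. The rules below are a minimal, hypothetical sketch, not tied to any particular data fabric product:

import re

VALIDATION_RULES = {
    "email":      lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", str(v)) is not None,
    "country":    lambda v: v in {"US", "GB", "IN", "DE"},
    "amount_usd": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate_record(record):
    """Return the fields that fail validation; an empty list means the record is clean."""
    return [
        field for field, rule in VALIDATION_RULES.items()
        if field in record and not rule(record[field])
    ]

print(validate_record({"email": "not-an-email", "country": "US", "amount_usd": -5}))
# ['email', 'amount_usd']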


AI isn’t taking your job; the big threat is a growing skills gap

While AI can boost productivity by handling routine tasks, it can’t replace the strategic roles filled by skilled professionals, Vianello said. To avoid those kinds of issues, agencies — just like companies — need to invest in adaptable, mission-ready teams with continuously updated skills in cloud, cyber, and AI. The technology, he said, should augment – not replace — human teams, automating repetitive tasks while enhancing strategic work. Success in high-demand tech careers starts with in-demand certifications, real-world experience, and soft skills. Ultimately, high-performing teams are built through agile, continuous training that evolves with the tech, Vianello said. “We train teams to use AI platforms like Copilot, Claude and ChatGPT to accelerate productivity,” Vianello said. “But we don’t stop at tools; we build ‘human-in-the-loop’ systems where AI augments decision-making and humans maintain oversight. That’s how you scale trust, performance, and ethics in parallel.” High-performing teams aren’t born with AI expertise; they’re built through continuous, role-specific, forward-looking education, he said, adding that preparing a workforce for AI is not about “chasing” the next hottest skill. “It’s about building a training engine that adapts as fast as technology evolves,” he said.


Got a new password manager? Don't leave your old logins exposed in the cloud - do this next

Those built-in utilities might have been good enough for an earlier era, but they aren't good enough for our complex, multi-platform world. For most people, the correct option is to switch to a third-party password manager and shut down all those built-in password features in the browsers and mobile devices you use. Why? Third-party password managers are built to work everywhere, with a full set of features that are the same (or nearly so) across every device. After you make that switch, the passwords you saved previously are left behind in a cloud service you no longer use. If you regularly switch between browsers (Chrome on your Mac or Windows PC, Safari on your iPhone), you might even have multiple sets of saved passwords scattered across multiple clouds. It's time to clean up that mess. If you're no longer using a password manager, it's prudent to track down those outdated saved passwords and delete them from the cloud. I've studied each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft Edge, and Mozilla Firefox. Here's how to find the password management settings for each one, export any saved passwords to a safe place, and then turn off the feature. As a final step, I explain how to purge saved passwords and stop syncing.


AI and technical debt: A Computer Weekly Downtime Upload podcast

Given that GenAI technology hit the mainstream with GPT 4 two years ago, Reed says: “It was like nothing ever before.” And while the word “transformational” tends to be generously overused in technology he describes generative AI as “transformational with a capital T.” But transformations are not instant and businesses need to understand how to apply GenAI most effectively, and figure out where it does and does not work well. “Every time you hear anything with generative AI, you hear the word journey and we're no different,” he says. “We are trying to understand it. We're trying to understand its capabilities and understand our place with generative AI,” Reed adds. Early adopters are keen to understand how to use GenAI in day-to-day work, which, he says, can range from being an AI-based work assistant or a tool that changes the way people search for information to using AI as a gateway to the heavy lifting required in many organisations. He points out that bet365 is no different. “We have a sliding scale of ambition, but obviously like anything we do in an organisation of this size, it must be measured, it must be understood and we do need to be very, very clear what we're using generative AI for.” One of the very clear use cases for GenAI is in software development. 


Cloud Exodus: When to Know It's Time to Repatriate Your Workloads

Because of the inherent scalability of cloud resources, the cloud makes a lot of sense when the compute, storage, and other resources your business needs fluctuate constantly in volume. But if you find that your resource consumption is virtually unchanged from month to month or year to year, you may not need the cloud. You may be able to spend less and enjoy more control by deploying on-prem infrastructure. ... Cloud costs will naturally fluctuate over time due to changes in resource consumption levels. It's normal if cost increases correlate with usage increases. What's concerning, however, is a spike in cloud costs that you can't tie to consumption changes. It's likely in that case that you're spending more either because your cloud service provider raised its prices or your cloud environment is not optimized from a cost perspective. ... You can reduce latency (meaning the delay between when a user requests data on the network and when it arrives) on cloud platforms by choosing cloud regions that are geographically proximate to your end users. But that only works if your users are concentrated in certain areas, and if cloud data centers are available close to them. If this is not the case, you are likely to run into latency issues, which could dampen the user experience you deliver. 


The future of data center networking and processing

The optical-to-electrical conversion that is performed by the optical transceiver is still needed in a CPO system, but it moves from a pluggable module located at the faceplate of the switching equipment to a small chip (or chiplet) that is co-packaged very closely to the target ICs inside the box. Data center chipset heavyweights Broadcom and Nvidia have both announced CPO-based data center networking products operating at 51.2 and 102.4 Tb/s. ... Early generation CPO systems, such as those announced by Broadcom and Nvidia for Ethernet switching, make use of high channel count fiber array units (FAUs) that are designed to precisely align the fiber cores to their corresponding waveguides inside the PICs. These FAUs are challenging to make as they require high fiber counts, mixed single-mode (SM) and polarization maintaining (PM) fibers, integration of micro-optic components depending on the fiber-to-chip coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers and multiple connector assemblies.  ... In addition to scale and cost benefits, extreme densities can be achieved at the edge of the PIC by bringing the waveguides very close together, down to about 30µm, which is far more than what can be achieved with even the thinnest fibers. Next generation fiber-to-chip coupling will enable GPU optics – which will require unprecedented levels of density and scale.


Align AI with Data, Analytics and Governance to Drive Intelligent, Adaptive Decisions and Actions Across the Organisation

Unlocking AI’s full business potential requires building executive AI literacy. They must be educated on AI opportunities, risks and costs to make effective, future-ready decisions on AI investments that accelerate organisational outcomes. Gartner recommends D&A leaders introduce experiential upskilling programs for executives, such as developing domain-specific prototypes to make AI tangible. This will lead to greater and more appropriate investment in AI capabilities. ... Using synthetic data to train AI models is now a critical strategy for enhancing privacy and generating diverse datasets. However, complexities arise from the need to ensure synthetic data accurately represents real-world scenarios, scales effectively to meet growing data demand and integrates seamlessly with existing data pipelines and systems. “To manage these risks, organisations need effective metadata management,” said Idoine. “Metadata provides the context, lineage and governance needed to track, verify and manage synthetic data responsibly, which is essential to maintaining AI accuracy and meeting compliance standards.” ... Building GenAI models in-house offers flexibility, control and long-term value that many packaged tools cannot match. As internal capabilities grow, Gartner recommends organisations adopt a clear framework for build versus buy decisions. 


Do Microservices' Benefits Supersede Their Caveats? A Conversation With Sam Newman

A microservice is one of those where it is independently deployable so I can make a change to it and I can roll out new versions of it without having to change any other part of my system. So things like avoiding shared databases are really about achieving that independent deployability. And it's a really simple idea that can be quite easy to implement if you know about it from the beginning. It can be difficult to implement if you're already in a tangled mess. And that idea of independent deployability has interesting benefits because the fact that something is independently deployable is obviously useful because it's low impact releases, but there's loads of other benefits that start to flow from that. ... The vast majority of people who tell me they've scaling issues often don't have them. They could solve their scaling issues with a monolith, no problem at all, and it would be a more straightforward solution. They're typically organizational scale issues. And so, for me, what the world needs from our IT's product-focused, outcome-oriented, and more autonomous teams. That's what we need, and microservices are an enabler for that. Having things like team topologies, which of course, although the DevOps topology stuff was happening around the time of my first edition of my book, that being kind of moved into the team topology space by Matthew and Manuel around the second edition again sort of helps kind of crystallize a lot of those concepts as well.


Why Businesses Must Upgrade to an AI-First Connected GRC System

Adopting a connected GRC solution enables organizations to move beyond siloed operations by bringing risk and compliance functions onto a single, integrated platform. It also creates a unified view of risks and controls across departments, bringing better workflows and encouraging collaboration. With centralized data and shared visibility, managing complex, interconnected risks becomes far more efficient and proactive. In fact, this shift toward integration reflects a broader trend that is seen in the India Regulatory Technology Business Report 2024–2029 findings, which highlight the growing adoption of compliance automation, AI, and machine learning in the Indian market. The report points to a future where GRC is driven by data, merging operations, technology, and control into a single, intelligent framework. ... An AI-first, connected GRC solution takes the heavy lifting out of compliance. Instead of juggling disconnected systems and endless updates, it brings everything together, from tracking regulations to automating actions to keeping teams aligned. For compliance teams, that means less manual work and more time to focus on what matters. ... A smart, integrated GRC solution brings everything into one place. It helps organizations run more smoothly by reducing errors and simplifying teamwork. It also means less time spent on admin and better use of people and resources where they are really needed.


The Importance of Information Sharing to Achieve Cybersecurity Resilience

Information sharing among different sectors predominantly revolves around threats related to phishing, vulnerabilities, ransomware, and data breaches. Each sector tailors its approach to cybersecurity information sharing based on regulatory and technological needs, carefully considering strategies that address specific risks and identify resolution requirements. For the mobile industry, however, cyberattacks on the networks themselves and misuse of interconnection signalling are also the focus of significant sharing efforts. Industries learn from each other by adopting sector-specific frameworks and leveraging real-time data to enhance their cybersecurity posture. This includes real-time sharing of indicators of compromise (IoCs) and the tactics, techniques, and procedures (TTPs) associated with phishing campaigns. An example of this is the recently launched Stop Scams UK initiative, bringing together tech, telecoms and finance industry leaders, who will share real-time data on fraud indicators to enhance consumer protection and foster economic security. This is an important development, as without cross-industry information sharing, it becomes difficult to determine whether a cybersecurity attack campaign is sector-specific or indiscriminate.
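
As a hedged illustration of what real-time IoC sharing can look like in practice, the sketch below builds a phishing indicator record loosely modelled on STIX-style fields and shows how it might be pushed to a community sharing endpoint. The endpoint URL, token and field names are assumptions, not the API of any particular sharing scheme.

# A minimal sketch of publishing a phishing IoC to a sharing community in
# near-real time. The record loosely mirrors a STIX-style indicator; the
# endpoint URL, API token and field names are illustrative assumptions.
import json
import urllib.request
from datetime import datetime, timezone

SHARING_ENDPOINT = "https://sharing.example.org/api/indicators"  # hypothetical
API_TOKEN = "replace-me"                                         # hypothetical

def build_phishing_ioc(url, campaign):
    return {
        "type": "indicator",
        "pattern": f"[url:value = '{url}']",
        "labels": ["phishing"],
        "campaign": campaign,          # ties the IoC to observed TTPs
        "valid_from": datetime.now(timezone.utc).isoformat(),
    }

def share(ioc):
    # Push the indicator to the community endpoint (would fail without a real one).
    req = urllib.request.Request(
        SHARING_ENDPOINT,
        data=json.dumps(ioc).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(json.dumps(build_phishing_ioc("https://login-example.bad/verify",
                                        "campaign-2025-08"), indent=2))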

Daily Tech Digest - June 01, 2025


Quote for the day:

"You are never too old to set another goal or to dream a new dream." -- C.S. Lewis


A wake-up call for real cloud ROI

To make cloud spending work for you, the first step is to stop, assess, and plan. Do not assume the cloud will save money automatically. Establish a meticulous strategy that matches workloads to the right environments, considering both current and future needs. Take the time to analyze which applications genuinely benefit from the public cloud versus alternative options. This is essential for achieving real savings and optimal performance. ... Enterprises should rigorously review their existing usage, streamline environments, and identify optimization opportunities. Invest in cloud management platforms that can automate the discovery of inefficiencies, recommend continuous improvements, and forecast future spending patterns with greater accuracy. Optimization isn’t a one-time exercise—it must be an ongoing process, with automation and accountability as central themes. Enterprises are facing mounting pressure to justify their escalating cloud spend and recapture true business value from their investments. Without decisive action, waste will continue to erode any promised benefits. ... In the end, cloud’s potential for delivering economic and business value is real, but only for organizations willing to put in the planning, discipline, and governance that cloud demands. 
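
A minimal sketch of the kind of automated inefficiency discovery described above: flag instances whose average utilisation suggests they are oversized and estimate the savings from rightsizing. The utilisation figures and the 50% savings assumption are illustrative, not vendor guidance.

# A minimal sketch of automated waste discovery: flag instances whose average
# CPU suggests they are oversized and estimate monthly savings from rightsizing.
IDLE_CPU_THRESHOLD = 10.0     # percent; illustrative threshold
RIGHTSIZE_SAVINGS = 0.5       # assume dropping one size saves roughly half the cost

instances = [
    {"name": "web-1",   "avg_cpu": 62.0, "monthly_cost": 310.0},
    {"name": "batch-7", "avg_cpu": 4.5,  "monthly_cost": 540.0},
    {"name": "dev-db",  "avg_cpu": 8.1,  "monthly_cost": 220.0},
]

def find_waste(instances):
    findings = []
    for inst in instances:
        if inst["avg_cpu"] < IDLE_CPU_THRESHOLD:
            findings.append({
                "name": inst["name"],
                "recommendation": "rightsize or schedule shutdown",
                "estimated_monthly_saving": round(inst["monthly_cost"] * RIGHTSIZE_SAVINGS, 2),
            })
    return findings

for f in find_waste(instances):
    print(f"{f['name']}: {f['recommendation']} (~${f['estimated_monthly_saving']}/month)")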


Why IT-OT convergence is a gamechanger for cybersecurity

The combination of IT and OT is a powerful one. It promises real-time visibility into industrial systems, predictive maintenance that limits downtime and data-driven decision making that gives everything from supply chain efficiency to energy usage a boost. When IT systems communicate directly with OT devices, businesses gain a unified view of operations – leading to faster problem solving, fewer breakdowns, smarter automation and better resource planning. This convergence also supports cost reduction through more accurate forecasting, optimised maintenance and the elimination of redundant technologies. And with seamless collaboration, IT and OT teams can now innovate together, breaking down silos that once slowed progress. Cybersecurity maturity is another major win. OT systems, often built without security in mind, can benefit from established IT protections like centralised monitoring, zero-trust architectures and strong access controls. Concurrently, this integration lays the foundation for Industry 4.0 – where smart factories, autonomous systems and AI-driven insights thrive on seamless IT-OT collaboration. ... The convergence of IT and OT isn’t just a tech upgrade – it’s a transformation of how we operate, secure and grow in our interconnected world. But this new frontier demands a new playbook that combines industrial knowhow with cybersecurity discipline.


How To Measure AI Efficiency and Productivity Gains

Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview. ... The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained." ... Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata. "Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. 
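
A minimal sketch of the before-and-after comparison Gaus describes for predictive maintenance and quality control: compare defect and breakdown rates across a baseline period and a post-adoption period. All figures are placeholders, not real benchmark data.

# Compare breakdown and defect rates before and after AI adoption.
baseline = {"units": 120_000, "defects": 1_440, "breakdown_hours": 310}
post_ai  = {"units": 125_000, "defects": 1_050, "breakdown_hours": 205}

def defect_rate(period):
    return period["defects"] / period["units"]

defect_change = (defect_rate(post_ai) - defect_rate(baseline)) / defect_rate(baseline)
downtime_change = (post_ai["breakdown_hours"] - baseline["breakdown_hours"]) / baseline["breakdown_hours"]

print(f"Defect rate change: {defect_change:+.1%}")   # negative = improvement
print(f"Downtime change:    {downtime_change:+.1%}")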


The discipline we never trained for: Why spiritual quotient is the missing link in leadership

Spiritual Quotient (SQ) is the intelligence that governs how we lead from within. Unlike IQ or EQ, SQ is not about skill—it is about state. It reflects a leader’s ability to operate from deep alignment with their values, to stay centred amid volatility and to make decisions rooted in clarity rather than compulsion. It shows up in moments when the metrics don’t tell the full story, when stakeholders pull in conflicting directions, and when the team is watching not just what you decide, but who you are while deciding it. It’s not about belief systems or spirituality in a religious sense; it’s about coherence between who you are, what you value, and how you lead. At its core, SQ is composed of several interwoven capacities: deep self-awareness, alignment with purpose, the ability to remain still and present amid volatility, moral discernment when the right path isn’t obvious, and the maturity to lead beyond ego. ... The workplace in 2025 is not just hybrid—it is holographic. Layers of culture, technology, generational values and business expectations now converge in real time. AI challenges what humans should do. Global disruptions challenge why businesses exist. Employees are no longer looking for charismatic heroes. They’re looking for leaders who are real, reflective and rooted.


Microsoft Confirms Password Deletion—Now Just 8 Weeks Away

The company’s solution is to first move autofill and then any form of password management to Edge. “Your saved passwords (but not your generated password history) and addresses are securely synced to your Microsoft account, and you can continue to access them and enjoy seamless autofill functionality with Microsoft Edge.” Microsoft has added an Authenticator splash screen with a “Turn on Edge” button as its ongoing campaign to switch users to its own browser continues. It’s not just passwords, of course: there are endless warnings and nags within Windows, and even pointers within security advisories, to switch to Edge for safety and security. ... Microsoft wants users to delete passwords once that’s done, so no legacy vulnerability remains, albeit Google has not gone quite that far as yet. You do need to remove SMS 2FA though, and use an app or key-based code at a minimum. ... Notwithstanding these Authenticator changes, Microsoft users should use this as a prompt to delete passwords and replace them with passkeys, per the Windows maker’s advice. This is especially true given increasing reports of two-factor authentication (2FA) bypasses that render basic forms of 2FA redundant.
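
For readers weighing the move away from SMS codes, the sketch below shows what app-based one-time codes (TOTP) look like under the hood, using the pyotp library. The secret is a placeholder, and this is a generic illustration rather than Microsoft Authenticator's implementation.

# A minimal sketch of app-based one-time codes (TOTP), the kind of 2FA the
# article recommends over SMS. Requires the pyotp library; the secret is a
# placeholder and the issuer/account names are illustrative.
import pyotp

secret = pyotp.random_base32()          # normally provisioned once via a QR code
totp = pyotp.TOTP(secret)

print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="ExampleApp"))
print("Current code:", totp.now())      # 6-digit code, rotates every 30 seconds
print("Verifies:", totp.verify(totp.now()))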


Sustainable cyber risk management emerges as industrial imperative as manufacturers face mounting threats

The ability of a business to adjust, absorb, and continue operating under pressure is becoming a performance metric in and of itself, measured in more than uptime or safety statistics. It’s not a technical checkbox; it’s a strategic commitment that is becoming the new baseline for industrial trust and continuity. At the heart of this change lies security by design. Organizations are working to integrate security into OT environments from the outset, working their way up from system architecture to vendor procurement and lifecycle management, rather than bolting protections on after deployment. ... The path is made more difficult by the acute shortage of OT cyber skills, which can be addressed by employing specialists and establishing long-term pipelines through internal reskilling, knowledge transfer procedures, and partnerships with universities. Building sustainable industrial cyber risk management can be made more structured using the ISA/IEC 62443 industrial cybersecurity standards. Thanks to these widely recognized models, cyber defense becomes a continuous, sustainable discipline rather than an after-the-fact response; they also allow industries to link risk mitigation to real industrial processes, guarantee system interoperability, and measure progress against common benchmarks.


Design Sprint vs Design Thinking: When to Use Each Framework for Maximum Impact

The Design Sprint is a structured five-day process created by Jake Knapp during his time at Google Ventures. It condenses months of work into a single workweek, allowing teams to rapidly solve challenges, create prototypes, and test ideas with real users to get clear data and insights before committing to a full-scale development effort. Unlike the more flexible Design Thinking approach, a Design Sprint follows a precise schedule with specific activities allocated to each day. ... The Design Sprint operates on the principle of "together alone" – team members work collaboratively during discussions and decision-making, but do individual work during ideation phases to ensure diverse thinking and prevent groupthink. ... Design Thinking is well-suited for broadly exploring problem spaces, particularly when the challenge is complex, ill-defined, or requires extensive user research. It excels at uncovering unmet needs and generating innovative solutions for "wicked problems" that don't have obvious answers. The Design Sprint works best when there's a specific, well-defined challenge that needs rapid resolution. It's particularly effective when a team needs to validate a concept quickly, align stakeholders around a direction, or break through decision paralysis.


Broadcom’s VMware Financial Model Is ‘Ethically Flawed’: European Report

Some of the biggest issues for VMware cloud partners and customers in Europe include the company increasing prices after Broadcom axed VMware’s former perpetual licenses and pay-as-you-go monthly pricing models. Another big issue is VMware cutting its product portfolio from thousands of offerings into just a few large bundles that are only available via subscription with a multi-year minimum commitment. “The current VMware licensing model appears to rely on practices that breach EU competition regulations which, in addition to imposing harm on its customers and the European cloud ecosystem, creates a material risk for the company,” said the ECCO in its report. “Their shareholders should investigate and challenge the legality of such model.” Additionally, the ECCO said Broadcom recently made changes to its partnership program that forced partners to choose between being either a cloud service provider or a reseller. “It is common in Europe for CSP to play both [service provider and reseller] roles, thus these new requirements are a further harmful restriction on European cloud service providers’ ability to compete and serve European customers,” the ECCO report said.


Protecting Supply Chains from AI-Driven Risks in Manufacturing

Cybercriminals are notorious for exploiting AI and have set their sights on supply chains. Supply chain attacks are surging, with current analyses indicating a 70% likelihood of cybersecurity incidents stemming from supplier vulnerabilities. Additionally, Gartner projects that by the end of 2025, nearly half of all global organizations will have faced software supply chain attacks. Attackers manipulate data inputs to mislead algorithms, disrupt operations or steal proprietary information. Hackers targeting AI-enabled inventory systems can compromise demand forecasting, causing significant production disruptions and financial losses. ... Continuous validation of AI-generated data and forecasts ensures that AI systems remain reliable and accurate. The “black-box” nature of most AI products, where internal processes remain hidden, demands innovative auditing approaches to guarantee reliable outputs. Organizations should implement continuous data validation, scenario-based testing and expert human review to mitigate the risks of bias and inaccuracies. While black-box methods like functional testing offer some evaluation, they are inherently limited compared to audits of transparent systems, highlighting the importance of open AI development.
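
A minimal sketch of the continuous-validation idea above: check AI demand forecasts against simple sanity rules and a historical baseline before they feed inventory systems. The thresholds and figures are illustrative assumptions, not a production validation suite.

# Flag or reject AI demand forecasts that violate basic sanity checks or drift
# too far from the trusted historical baseline (e.g. after poisoned inputs).
HISTORICAL_WEEKLY_MEAN = 10_000     # units, from trusted historical data
MAX_DEVIATION = 0.40                # flag forecasts >40% away from baseline

def validate_forecast(forecast):
    issues = []
    if any(v < 0 for v in forecast):
        issues.append("negative demand values")
    weekly_mean = sum(forecast) / len(forecast)
    deviation = abs(weekly_mean - HISTORICAL_WEEKLY_MEAN) / HISTORICAL_WEEKLY_MEAN
    if deviation > MAX_DEVIATION:
        issues.append(f"mean deviates {deviation:.0%} from historical baseline")
    return issues

suspicious = [14_800, 15_200, -300, 16_100]
print(validate_forecast(suspicious) or "forecast passes basic checks")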


What's the State of AI Costs in 2025?

This year's report revealed that 44% of respondents plan to invest in improving AI explainability. Their goals are to increase accountability and transparency in AI systems as well as to clarify how decisions are made so that AI models are more understandable to users. Juxtaposed with uncertainty around ROI, this statistic signals further disparity between organizations' usage of AI and accurate understanding of it. ... Of the companies that use third-party platforms, over 90% reported high awareness of AI-driven revenue. That awareness empowers them to confidently compare revenue and cost, leading to very reliable ROI calculations. Conversely, companies that don't have a formal cost-tracking system have much less confidence that they can correctly determine the ROI of their AI initiatives. ... Even the best-planned AI projects can become unexpectedly expensive if organizations lack effective cost governance. This report highlights the need for companies to not merely track AI spend but optimize it via real-time visibility, cost attribution, and useful insights. Cloud-based AI tools account for almost two-thirds of AI budgets, so cloud cost optimization is essential if companies want to stop overspending. Cost is more than a metric; it's the most strategic measure of whether AI growth is sustainable. As companies implement better cost management practices and tools, they will be able to scale AI in a fiscally responsible way, confidently measure ROI, and prevent financial waste.
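
A minimal sketch of the cost-attribution step that makes those ROI calculations reliable: attribute AI spend and revenue per initiative and compute ROI for each. All figures are placeholders.

# ROI per AI initiative once revenue and spend are both attributed.
initiatives = [
    {"name": "support-copilot", "attributed_revenue": 420_000, "ai_spend": 150_000},
    {"name": "demand-forecast", "attributed_revenue": 180_000, "ai_spend": 210_000},
]

for item in initiatives:
    roi = (item["attributed_revenue"] - item["ai_spend"]) / item["ai_spend"]
    print(f"{item['name']}: ROI {roi:+.0%}")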