Daily Tech Digest - August 24, 2025


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Creating the ‘AI native’ generation: The role of digital skills in education

Boosting AI skills has the potential to drive economic growth and productivity and create jobs, but ambition must be matched with effective delivery. We must ensure AI is integrated into education in a way that encourages students to maintain critical thinking skills, skeptically assess AI outputs, and use the technology responsibly and ethically. Education should also inspire future tech talent and prepare them for the workplace. ... AI fluency is only one part of the picture. Amid a global skills gap, we also need to capture the imaginations of young people to work in tech. To achieve this, AI and technology education must be accessible, meaningful, and aspirational. That requires coordinated action from schools, industry, and government to promote the real-world impact of digital skills, create clearer and more inspiring pathways into tech careers, and expose students to how AI is applied in various professions. Early exposure to AI can do far more than build fluency: it can spark curiosity, confidence, and career ambition towards high-value sectors like data science, engineering, and cybersecurity, areas where the UK must lead. ... Students who learn how to use AI now will build the competencies that industries want and need for years to come. But this will form the first stage of a broader AI learning arc in which learning and upskilling become a lifelong mindset, not a single milestone.


What is the State of SIEM?

In addition to high deployment costs, many organizations grapple with implementing SIEM. A primary challenge is SIEM configuration -- given that the average organization has more than 100 different data sources that must plug into the platform, according to an IDC report. It can be daunting for network staff, when deploying SIEM, to choose which data sources to integrate; set up correlation rules that define what will be classified as a security event; and determine the alert thresholds for specific data and activities. It's equally challenging to manage the information and alerts a SIEM platform issues. If you tune the rules too aggressively, the result can be a flood of false positives as the system triggers alarms about events that aren't actually threats. This is a time-stealer for network techs and can lead to staff fatigue and frustration. In contrast, if the calibration is too liberal, organizations run the risk of overlooking something that could be vital. Network staff must also coordinate with other areas of IT and the company. For example, what if data safekeeping and compliance regulations change? Does this change SIEM rule sets? What if the IT applications group rolls out new systems that must be attached to SIEM? Can the legal department or auditors tell you how long to retain data for eDiscovery or for disaster backup and recovery? And which data noise can you discard as waste?
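To make that tuning trade-off concrete, here is a minimal TypeScript sketch of a threshold-based correlation rule of the kind described above. The event shape, field names, and numbers are illustrative assumptions, not any vendor's schema.

```typescript
// Minimal sketch of a threshold-based SIEM correlation rule.
// Event shape, rule fields, and numbers are illustrative, not a vendor schema.
interface LogEvent {
  source: string;    // e.g. "firewall", "ad-auth"
  type: string;      // e.g. "login_failure"
  host: string;
  timestamp: number; // epoch milliseconds; events assumed sorted by time
}

interface CorrelationRule {
  eventType: string;
  threshold: number; // alert when this many matches occur...
  windowMs: number;  // ...within this sliding window
}

// Raise an alert when a host accumulates `threshold` matching events
// inside the rule's time window.
function evaluate(events: LogEvent[], rule: CorrelationRule): string[] {
  const alerts: string[] = [];
  const byHost = new Map<string, number[]>();
  for (const e of events.filter(ev => ev.type === rule.eventType)) {
    const recent = (byHost.get(e.host) ?? [])
      .filter(t => e.timestamp - t <= rule.windowMs); // drop stale entries
    recent.push(e.timestamp);
    byHost.set(e.host, recent);
    if (recent.length >= rule.threshold) {
      alerts.push(`ALERT: ${recent.length}x ${rule.eventType} on ${e.host} within ${rule.windowMs / 1000}s`);
    }
  }
  return alerts;
}

// Tightening the numbers cuts false positives but risks missing slow attacks;
// loosening them catches more but fatigues analysts.
const bruteForceRule: CorrelationRule = {
  eventType: "login_failure",
  threshold: 5,
  windowMs: 60_000,
};
```

Multiply that judgment call across the 100-plus data sources cited above and the calibration burden becomes clear.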


AI Data Centers: A Popular Term That’s Hard to Define

The tricky thing about trying to define AI data centers based on characteristics like those described above is that none of those features is unique to AI data centers. For example, hyperscale data centers – meaning very large facilities capable of accommodating more than a hundred thousand servers in some cases – existed before modern AI debuted. AI has made large-scale data centers more important because AI workloads require vast infrastructures, but it’s not as if no one was building large data centers before AI rose to prominence. Likewise, it has long been possible to deploy GPU-equipped servers in data centers. ... Likewise, advanced cooling systems and innovative approaches to data center power management are not unique to the age of generative AI. They, too, predated AI data centers. ... Arguably, an AI data center is ultimately defined by what it does (hosting AI workloads) more than by how it does it. So, before getting hung up on the idea that AI requires investment in a new generation of data centers, it’s perhaps healthier to think about how to leverage the data centers already in existence to support AI workloads. That perspective will help the industry avoid the risk of overinvesting in new data centers designed specifically for AI – and as a bonus, it may save money by allowing businesses to repurpose the data centers they already own to meet their AI needs as well.


Password Managers Vulnerable to Data Theft via Clickjacking

Tóth showed how an attacker can use DOM-based extension clickjacking and the autofill functionality of password managers to exfiltrate sensitive data stored by these applications, including personal data, usernames and passwords, passkeys, and payment card information. The attacks demonstrated by the researcher require 0-5 clicks from the victim, with a majority requiring only one click on a harmless-looking element on the page. The single-click attacks often involved exploitation of XSS or other vulnerabilities. DOM, or Document Object Model, is an object tree created by the browser when it loads an HTML or XML web page. ... Tóth’s attack involves a malicious script that manipulates user interface elements injected by browser extensions into the DOM. “The principle is that a browser extension injects elements into the DOM, which an attacker can then make invisible using JavaScript,” he explained. According to the researcher, some of the vendors have patched the vulnerabilities, but fixes have not been released for Bitwarden, 1Password, iCloud Passwords, Enpass, LastPass, and LogMeOnce. SecurityWeek has reached out to these companies for comment. Bitwarden said a fix for the vulnerability is being rolled out this week with version 2025.8.0. LogMeOnce said it’s aware of the findings and its team is actively working on resolving the issue through a security update.
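For illustration, here is a simplified TypeScript sketch of the DOM manipulation Tóth describes; the element selector and the decoy overlay are hypothetical, since each extension injects different markup and real attacks vary per target.

```typescript
// Simplified sketch of DOM-based extension clickjacking as described above.
// The selector is hypothetical; each password manager injects different markup.
const dropdown = document.querySelector<HTMLElement>("#pm-autofill-dropdown");

if (dropdown) {
  // Make the extension's injected autofill UI invisible but still clickable.
  dropdown.style.opacity = "0";

  // Place a harmless-looking decoy beneath it so the victim thinks
  // they are clicking an ordinary page element.
  const decoy = document.createElement("button");
  decoy.textContent = "Accept cookies";
  const rect = dropdown.getBoundingClientRect();
  Object.assign(decoy.style, {
    position: "absolute",
    left: `${rect.left + window.scrollX}px`,
    top: `${rect.top + window.scrollY}px`,
    zIndex: "-1", // rendered behind the transparent dropdown
  });
  document.body.appendChild(decoy);
  // A single click on the apparent "button" actually lands on the invisible
  // dropdown, triggering autofill of credentials into the attacker's page.
}
```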


Iskraemeco India CEO: ERP, AI, and the future of utility leadership

We see a clear convergence ahead, where ERP systems like Infor’s will increasingly integrate with edge AI, embedded IoT, and low-code automation to create intelligent, responsive operations. This is especially relevant in utility scenarios where time-sensitive data must drive immediate action. For instance, our smart kits – equipped with sensor technology – are being designed to detect outages in real time and pinpoint exact failure points, such as which pole needs service during a natural disaster. This type of capability, powered by embedded IoT and edge computing, enables decisions to be made closer to the source, reducing downtime and response lag. ... One of the most important lessons we've learned is that success in complex ERP deployments is less about customisation and more about alignment across leadership, teams, and technology. In our case, resisting the urge to modify the system and instead adopting Infor’s best-practice frameworks was key. It allowed us to stay focused, move faster, and ensure long-term stability across all modules. In a multi-stakeholder environment – where regulatory bodies, internal departments, and technology partners are all involved – clarity of direction from leadership made all the difference. When the expectation is clear that we align to the system, and not the other way around, it simplifies everything from compliance to team onboarding.
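As a toy illustration of that edge pattern (decide locally, report only actionable events upstream), consider the following TypeScript sketch; the sensor fields, threshold, and reporting endpoint are all invented for illustration.

```typescript
// Toy sketch of edge-side outage detection in the spirit described above.
// Sensor fields, the threshold, and the endpoint are invented for illustration.
interface PoleSensorReading {
  poleId: string;
  voltage: number;   // volts measured at the pole
  timestamp: number; // epoch milliseconds
}

const OUTAGE_VOLTAGE_THRESHOLD = 10; // near-zero line voltage suggests an outage

// Runs on the edge device itself: the decision is made next to the sensor,
// and only actionable events travel upstream to the central system.
function checkReading(reading: PoleSensorReading): void {
  if (reading.voltage < OUTAGE_VOLTAGE_THRESHOLD) {
    // Reports the exact failure point (which pole needs service)
    // without waiting on a round trip to a central platform.
    void fetch("https://example.invalid/outage-events", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ poleId: reading.poleId, detectedAt: reading.timestamp }),
    });
  }
}
```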


Experts Concerned by Signs of AI Bubble

"There's a huge boom in AI — some people are scrambling to get exposure at any cost, while others are sounding the alarm that this will end in tears," Kai Wu, founder and chief investment officer of Sparkline Capital, told the Wall Street Journal last year. There are even doubters inside the industry. In July, recently ousted CEO of AI company Stability AI Emad Mostaque told banking analysts that "I think this will be the biggest bubble of all time." "I call it the 'dot AI’ bubble, and it hasn’t even started yet," he added at the time. Just last week, Jeffrey Gundlach, billionaire CEO of DoubleLine Capital, also compared the AI craze to the dot com bubble. "This feels a lot like 1999," he said during an X Spaces broadcast last week, as quoted by Business Insider. "My impression is that investors are presently enjoying the double-top of the most extreme speculative bubble in US financial history," Hussman Investment Trust president John Hussman wrote in a research note. In short, with so many people ringing the alarm bells, there could well be cause for concern. And the consequences of an AI bubble bursting could be devastating. ... While Nvidia would survive such a debacle, the "ones that are likely to bear the brunt of the correction are the providers of generative AI services who are raising money on the promise of selling their services for $20/user/month," he argued.


OpenCUA’s open source computer-use agents rival proprietary models from OpenAI and Anthropic

Computer-use agents are designed to autonomously complete tasks on a computer, from navigating websites to operating complex software. They can also help automate workflows in the enterprise. However, the most capable CUA systems are proprietary, with critical details about their training data, architectures, and development processes kept private. “As the lack of transparency limits technical advancements and raises safety concerns, the research community needs truly open CUA frameworks to study their capabilities, limitations, and risks,” the researchers state in their paper. ... The tool streamlines data collection by running in the background on an annotator’s personal computer, capturing screen videos, mouse and keyboard inputs, and the underlying accessibility tree, which provides structured information about on-screen elements.  ... The key insight was to augment these trajectories with chain-of-thought (CoT) reasoning. This process generates a detailed “inner monologue” for each action, which includes planning, memory, and reflection. This structured reasoning is organized into three levels: a high-level observation of the screen, reflective thoughts that analyze the situation and plan the next steps, and finally, the concise, executable action. This approach helps the agent develop a deeper understanding of the tasks.
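The three-level structure lends itself to a simple per-step record. The sketch below uses shorthand field names of my own, not the paper's actual schema, to show how observation, reflection, and action might sit alongside each captured step.

```typescript
// Rough sketch of one CoT-augmented trajectory step, following the
// three-level structure described above. Field names are shorthand,
// not the schema used in the OpenCUA paper.
interface CuaStep {
  screenshot: string;  // reference to the captured screen frame
  observation: string; // high-level observation of the screen
  thought: string;     // reflective analysis and plan for the next step
  action: string;      // concise, executable action
}

const step: CuaStep = {
  screenshot: "frame_0042.png",
  observation: "A file-save dialog is open with the filename field focused.",
  thought: "The task asks for a CSV export; type the target name, then confirm.",
  action: 'type("report_q3.csv"); click("Save")',
};
```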


How to remember everything

MyMind is a clutter-free bookmarking and knowledge-capture app without folders or manual content organization. There are no templates, manual customizations, or collaboration tools. Instead, MyMind recognizes the content type and formats it elegantly. For example, songs, movies, books, and recipes are displayed differently based on MyMind’s detection, regardless of the source, as are pictures and videos. MyMind uses AI to auto-tag everything and allows custom tags. Every word, including those in pictures, is indexed. You can take pictures of information, upload them to MyMind, and find them later by searching for a word or two found in the picture. Copying a sentence or paragraph from an article will display the quote with a source link. Every data chunk is captured in a “card.” ... Alongside AI-enabled lifelogging tools like MyMind, we’re also entering an era of lifelogging hardware devices. One promising direction comes from a startup called Brilliant Labs. Its new $299 Halo glasses, available for pre-order and shipping in November, are lightweight AI glasses. The glasses have a long list of features (bone conduction sound, a camera, light weight, etc.), but the lifelogging enabler is an “agentic memory” system called Narrative. It captures information automatically from the camera and microphones and places it into a personal knowledge base.
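Here is a toy sketch of that capture-and-search pattern; the card shape, tagging, and word index are assumptions about how such an app might work, not MyMind's actual internals.

```typescript
// Toy sketch of the "card" capture-and-search pattern described above.
// The card shape and word index are assumptions, not MyMind's internals.
interface Card {
  id: string;
  content: string;   // text, a quote with source link, or OCR'd text from an image
  sourceUrl?: string;
  tags: string[];    // auto-generated tags plus any custom ones
}

const wordIndex = new Map<string, Set<string>>(); // word -> ids of cards containing it

function capture(card: Card): void {
  // Every word is indexed, including words recognized inside pictures.
  for (const word of card.content.toLowerCase().split(/\W+/).filter(Boolean)) {
    let ids = wordIndex.get(word);
    if (!ids) {
      ids = new Set();
      wordIndex.set(word, ids);
    }
    ids.add(card.id);
  }
}

// Find cards containing every query word ("a word or two found in the picture").
function search(query: string): Set<string> {
  const words = query.toLowerCase().split(/\W+/).filter(Boolean);
  if (words.length === 0) return new Set();
  const hits = words.map(w => wordIndex.get(w) ?? new Set<string>());
  return hits.reduce((acc, s) => new Set([...acc].filter(id => s.has(id))));
}
```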


From APIs to Digital Twins: Warehouse Integration Strategies for Smarter Supply Chains

Digital twins create virtual replicas of warehouses and supply chains for monitoring and testing. A digital twin ingests live data from IoT sensors, machines, and transportation feeds to simulate how changes affect outcomes. For instance, GE’s “Digital Wind Farm” project feeds sensor data from each turbine into a cloud model, suggesting performance tweaks that boost energy output by ~20% (worth up to ~$100M in additional value per wind farm). In warehousing, digital twins can model workflows (layout changes, staffing shifts, equipment usage) to identify bottlenecks or test improvements before physical changes. Paired with AI, these twins become predictive and prescriptive: companies can run thousands of what-if scenarios (like a port strike or demand surge) and adjust plans accordingly. ... Today’s warehouses are not just storage sheds; they are smart, interconnected nodes in the supply chain. Leveraging IIoT sensors, cloud APIs, AI analytics, robotics, and digital twins transforms logistics into a competitive advantage. Integrated systems reduce manual handoffs and errors: for example, automated picking and instant carrier booking can shorten fulfillment cycles from days to hours. Industry data bear this out: deploying these technologies can improve on-time delivery by ~20% and significantly lower operating costs.
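To show the shape of such scenario testing, here is a minimal what-if loop against a toy warehouse model; the throughput model and every number in it are invented for illustration.

```typescript
// Minimal what-if scenario loop against a toy warehouse twin.
// The throughput model and all numbers are invented for illustration.
interface Scenario {
  name: string;
  pickers: number;          // staffing level
  demandMultiplier: number; // e.g. 1.5 models a demand surge
}

// Toy model: hourly orders shipped, capped by picking capacity.
function simulate(s: Scenario): number {
  const baseDemand = 400;       // orders per hour under normal conditions
  const capacityPerPicker = 30; // orders per hour one picker can fulfil
  return Math.min(baseDemand * s.demandMultiplier, s.pickers * capacityPerPicker);
}

const scenarios: Scenario[] = [
  { name: "baseline", pickers: 15, demandMultiplier: 1.0 },
  { name: "demand surge", pickers: 15, demandMultiplier: 1.5 },
  { name: "surge + extra shift", pickers: 22, demandMultiplier: 1.5 },
];

// A real digital twin would replay live IoT feeds through a far richer model;
// the loop structure (enumerate scenarios, compare outcomes) is the point here.
for (const s of scenarios) {
  console.log(`${s.name}: ${simulate(s)} orders/hour shipped`);
}
```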


Enterprise Software Spending Surges Despite AI ROI Shortfalls

AI capabilities increasingly drive software purchasing decisions. However, many organizations struggle with the gap between AI promise and practical ROI delivery. The disconnect stems from fundamental challenges in data accessibility and contextual understanding. Current AI implementations face significant obstacles in accessing the full spectrum of contextual data required for complex decision-making. "In complex use cases, where the exponential benefits of AI reside, AI still feels forced and contrived when it doesn't have the same amount and depth of contextual data required to read a situation," Kirkpatrick explained. Effective AI implementation requires comprehensive data infrastructure investments. Organizations must ensure AI models can access approved data sources while maintaining proper guardrails. Many IT departments are still working to achieve this balance. The challenge intensifies in environments where AI needs to integrate across multiple platforms and data sources. Well-trained humans often outperform AI on complex tasks because their experience allows them to read multiple factors and adjust contextually. "For AI to mimic that experience, it requires a wide range of data that can address factors across a wide range of dimensions," Kirkpatrick said. "That requires significant investment in data to ensure the AI has the information it needs at the right time, with the proper context, to function seamlessly, effectively, and efficiently."
