Daily Tech Digest - March 15, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


Guardians of AIoT: Protecting Smart Devices from Data Poisoning

Machine learning algorithms rely on datasets to identify and predict patterns, and a model's performance is determined by the quality and completeness of that data. Data poisoning attacks tamper with the AI's knowledge by introducing false or misleading information, usually following these steps: the attacker gains access to the training dataset and injects malicious samples; the AI is then trained on the poisoned data and incorporates the corrupt patterns into its decision-making process; once the poisoned model is deployed, attackers exploit it to bypass security systems or tamper with critical tasks. ... The addition of AI into IoT ecosystems has intensified the potential attack surface. Traditional IoT devices were limited in functionality, but AIoT systems rely on data-driven intelligence, which makes them more vulnerable to such attacks and challenges the security of the devices: AIoT devices collect data from many different sources, which increases the likelihood of data being tampered with; poisoned data can have catastrophic effects on real-time decision-making; and many IoT devices possess limited computational power to implement strong security measures, which makes them easy targets for these attacks.
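The steps above can be sketched with a toy experiment. This is a minimal illustration (not any real AIoT system): a nearest-centroid classifier is trained once on clean data and once on data where an attacker has injected mislabeled samples near the "malicious" cluster, shifting the decision boundary. The dataset and labels are invented for illustration.

```python
# Toy sketch of label-flipping data poisoning against a nearest-centroid
# classifier. All data points and class names are invented.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples):
    # samples: list of (feature_vector, label)
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

clean = [((0.1, 0.2), "benign"), ((0.2, 0.1), "benign"),
         ((0.9, 0.8), "malicious"), ((0.8, 0.9), "malicious")]

# Step 1: the attacker injects mislabeled samples near the malicious cluster.
poison = [((0.85, 0.85), "benign"), ((0.9, 0.9), "benign"),
          ((0.95, 0.8), "benign")]

# Step 2: the model trains on the poisoned data and absorbs the corrupt pattern.
clean_model = train(clean)
poisoned_model = train(clean + poison)

# Step 3: a sample near the malicious region now slips past the classifier.
probe = (0.7, 0.7)
print(predict(clean_model, probe))     # malicious
print(predict(poisoned_model, probe))  # benign
```

A handful of injected points is enough to move the "benign" centroid toward the malicious region, which is the essence of the bypass attack described above.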


Preparing for The Future of Work with Digital Humans

For businesses to prepare their staff for the workplace of tomorrow, they need to embrace the technologies of tomorrow—namely, digital humans. These advanced solutions will empower L&D leaders to drive immersive learning experiences for their staff. Digital humans use various technologies and techniques like conversational AI, large language models (LLMs), retrieval augmented generation, digital human avatars, virtual reality (VR), and generative AI to produce engaging and interactive scenarios that are perfect for training. Recall that a major issue with current training methods is that staff never have opportunities to apply the information they just consumed, resulting in the loss of said information. Digital humans avoid this problem by generating lifelike roleplay scenarios where trainees can actually apply and practice what they have learned, reinforcing knowledge retention. In a sales training example, the digital human takes on the role of a customer, allowing the employee to practice their pitch for a new product or service. The employee can rehearse in realistic conditions rather than studying the details of the new product or service and then jumping on a call with a live customer. A detractor might push back and say that digital humans lack a necessary human element.


3 ways test impact analysis optimizes testing in Agile sprints

Code modifications or application changes inherently present risks by potentially introducing new bugs. Not thoroughly validating these changes through testing and review processes can lead to unintended consequences—destabilizing the system and compromising its functionality and reliability. However, validating code changes can be challenging, as it requires developers and testers to either rerun their entire test suites every time changes occur or to manually identify which test cases are impacted by code modifications, which is time-consuming and not optimal in Agile sprints. ... Test impact analysis automates the change analysis process, providing teams with the information they need to focus their testing efforts and resources on validating application changes for each set of code commits versus retesting the entire application each time changes occur. ... In UI and end-to-end verifications, test impact analysis offers significant benefits by addressing the challenge of slow test execution and minimizing the wait time for regression testing after application changes. UI and end-to-end testing are resource-intensive because they simulate comprehensive user interactions across various components, requiring significant computational power and time. 
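The core of test impact analysis is a mapping from tests to the source files they exercise: given a set of changed files, only the intersecting tests need to run. A minimal sketch, with a hypothetical coverage map and file names:

```python
# Minimal sketch of test impact analysis: select only the tests whose
# covered source files intersect the set of changed files in a commit.
# The coverage map and file names are hypothetical.

COVERAGE_MAP = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login":    {"auth.py", "session.py"},
    "test_search":   {"search.py", "index.py"},
    "test_profile":  {"auth.py", "profile.py"},
}

def impacted_tests(changed_files, coverage_map):
    changed = set(changed_files)
    return sorted(t for t, files in coverage_map.items() if files & changed)

# A commit touching only auth.py re-runs two tests, not the whole suite.
print(impacted_tests(["auth.py"], COVERAGE_MAP))
```

Real tools build this map automatically from per-test coverage data or static dependency analysis, but the selection step itself is this simple set intersection.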


No one knows what the hell an AI agent is

Well, agents — like AI — are a nebulous thing, and they’re constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent — and their capabilities are all over the map. Rich Villars, GVP of worldwide research at IDC, noted that tech companies “have a long history” of not rigidly adhering to technical definitions. “They care more about what they are trying to accomplish” on a technical level, Villars told TechCrunch, “especially in fast-evolving markets.” But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai. “The concepts of AI ‘agents’ and ‘agentic’ workflows used to have a technical meaning,” Ng said in a recent interview, “but about a year ago, marketers and a few big companies got a hold of them.” The lack of a unified definition for agents is both an opportunity and a challenge, Jim Rowan, head of AI for Deloitte, says. On the one hand, the ambiguity allows for flexibility, letting companies customize agents to their needs. On the other, it may — and arguably already has — lead to “misaligned expectations” and difficulties in measuring the value and ROI from agentic projects. “Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes,” Rowan said. 


Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled "Auditing language models for hidden objectives," Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles or "personas." The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, although the methods are still under research. While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users. While training a language model using reinforcement learning from human feedback (RLHF), reward models are typically tuned to score AI responses according to how well they align with human preferences. However, if reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in AI models. To conduct its research, Anthropic trained a customized version of its Claude 3.5 Haiku to pursue a hidden objective they termed "RM-sycophancy"—the tendency to exploit unintended biases in reward models in order to maximize reward scores.
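The failure mode the paper studies can be illustrated with a deliberately crude stand-in for a reward model. This is not Anthropic's setup, just a toy showing how an unintended bias (here, a flattery bonus) can outweigh the intended signal and reward sycophancy over correctness:

```python
# Toy illustration of a mis-tuned reward model: it scores the intended
# signal (a correct answer) but also carries an unintended bias that
# rewards flattery. The scoring rule is invented for illustration.

def biased_reward_model(response: str) -> float:
    text = response.lower()
    score = 1.0 if "correct answer" in text else 0.0  # intended signal
    score += 0.5 * text.count("great question")       # unintended bias
    return score

honest = "The correct answer is 42."
sycophantic = "Great question! Great question! Great question! It might be 7."

# The flattering-but-wrong response outscores the honest one, so RLHF
# would push the policy toward exploiting the bias.
print(biased_reward_model(honest))
print(biased_reward_model(sycophantic))
```

A policy optimized against such a reward model learns "RM-sycophancy" in miniature: it maximizes score by exploiting the bias rather than by being right.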


Strategies for Success in the Age of Intelligent Automation

Firstly, the integration of AI into existing organizational frameworks calls for a largely collaborative environment. It is imperative for employees to perceive AI not as a usurper of employment, but instead as an ally in achieving collective organizational goals. Cultivating a culture of collaboration between AI systems and human workers is essential to the successful deployment of intelligent automation. Organizations should focus on fostering open communication channels, ensuring that employees understand how AI can enhance their roles and contribute to the organization’s success. To achieve this, leadership must actively engage with employees, addressing concerns and highlighting the benefits of AI integration. ... The ethical ramifications of AI workforce deployment demand meticulous scrutiny. Transparency, accountability, and fairness are integral, and their importance can’t be overstated. It’s vital that AI-driven decisions are aligned with ethical standards. Organizations are responsible for establishing robust ethical frameworks that govern AI interactions, mitigating potential biases and ensuring equitable outcomes. The best way to do this is to implement standards for monitoring AI systems, ensuring they operate within defined ethical boundaries.


AI & Innovation: The Good, the Useless – and the Ugly

First things first: there is good innovation, the kind that genuinely benefits society. AI that enhances energy efficiency in manufacturing, aids scientific discoveries, improves extreme weather prediction, and optimizes resource use in companies falls into this category. Governments can foster those innovations through targeted R&D support, incentives for firms to develop and deploy AI, “buy European tech” procurement policies, and investments in robust digital infrastructure. The Competitiveness Compass outlines similar strategies. That said, given how many different technologies are lumped together in the AI category—everything from facial recognition technology to smart ad tech, ChatGPT, and advanced robotics—it makes little sense to talk about good innovation and “AI and productivity” in the abstract. Most hype these days is about generative AI systems that mimic human creative abilities with striking aptitude. Yet, how transformative will an improved ChatGPT be for businesses? It might streamline some organizational processes, expedite data processing, and automate routine content generation. For some industries, like insurance companies, such capabilities may be revolutionary. For many others, its innovation footprint will be much more modest. 


Revolution at the Edge: How Edge Computing is Powering Faster Data Processing

Due to its unparalleled advantages, edge computing is rapidly becoming the primary supporting technology of industries where speed, reliability, or efficiency aren’t just useful but imperative. Edge computing relies on IoT as its most crucial component since there are billions of connected devices producing an immense and constant amount of data that needs to be processed right away. IoT devices in the residential sector, such as smart sensors in homes or Nest smart thermostats, as well as peripherals used for industrial automation in factories, all use edge computing. ... The way edge computing will function in the future is very exciting. With 5G, AI, and IoT, edge technologies are likely to become smarter, more widespread, and faster. Imagine a world where factories optimize themselves, smart traffic systems talk to autonomous vehicles, and healthcare devices stop illnesses from happening before they start.


Harnessing the data storm: three top trends shaping unstructured data storage and AI

The sheer volume of unstructured information generated by enterprises necessitates a new approach to storage. Object storage offers a better, more cost-effective method for handling significant datasets compared to traditional file-based systems. Unlike traditional storage methods, object storage treats each data item as a distinct object with its own metadata. This approach offers both scalability and flexibility, making it ideal for managing the vast quantities of images, videos, sensor data, and other unstructured content generated by modern enterprises. ... Data lakes, the centralized repositories for both structured and unstructured data, are becoming increasingly sophisticated with the integration of AI and machine learning. These enable organizations to delve deeper into their data, uncovering hidden patterns and generating actionable insights without requiring complex and costly data preparation processes. ... The explosion of unstructured data presents both immense opportunities and challenges for organizations in every market across the globe. To thrive in this data-driven era, businesses must embrace innovative approaches to data storage, management, and analysis that are both cost-effective and compliant with evolving regulations.
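The object-storage idea above — opaque blobs addressed by flat keys, each carrying its own metadata — can be sketched in a few lines. This is a toy in-memory model, not a real object store API; the class, keys, and metadata fields are invented:

```python
# Minimal sketch of object storage: each item is an opaque blob plus its
# own metadata, addressed by a flat key instead of a directory hierarchy.
# Keys and metadata fields here are illustrative.

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        self._objects[key] = {"data": data, "metadata": metadata}

    def get(self, key):
        return self._objects[key]["data"]

    def find(self, **criteria):
        # Metadata-driven lookup: no path traversal, just attribute matching.
        return sorted(
            key for key, obj in self._objects.items()
            if all(obj["metadata"].get(k) == v for k, v in criteria.items())
        )

store = ObjectStore()
store.put("cam01/frame-001", b"...", content_type="image/jpeg", sensor="cam01")
store.put("cam01/frame-002", b"...", content_type="image/jpeg", sensor="cam01")
store.put("logs/day-001", b"...", content_type="text/plain", sensor="gateway")

# Find all images without knowing where they "live" in any folder tree.
print(store.find(content_type="image/jpeg"))
```

Because lookup goes through metadata rather than paths, the namespace scales flat, which is what makes the model a fit for billions of unstructured items.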


Open Source Tools Seen as Vital for AI in Hybrid Cloud Environments

The landscape of enterprise open source solutions is evolving rapidly, driven by the need for flexibility, scalability, and innovation. Enterprises are increasingly relying on open source technologies to drive digital transformation, accelerate software development, and foster collaboration across ecosystems. With advancements in cloud computing, AI, and containerization, open source solutions are shaping the future of IT by providing adaptable and secure platforms that meet evolving business needs. The active and diverse community support ensures continuous improvement, making open source a cornerstone of modern enterprise technology strategies. Red Hat's portfolio, including Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat AI and Red Hat Ansible Automation Platform, provides robust platforms that support diverse workloads across hybrid and multi-cloud environments. Additionally, Red Hat's extensive partner ecosystem offers more seamless integration and support for a wide range of technologies and applications. Our commitment to open source principles and continuous innovation allows us to deliver solutions that are secure, scalable, and tailored to the needs of our customers. Open source has proven to be trusted and secure at the forefront of innovation.


Daily Tech Digest - March 14, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw


The Maturing State of Infrastructure as Code in 2025

The progression from cloud-specific frameworks to declarative, multicloud solutions like Terraform represented the increasing sophistication of IaC capabilities. This shift enabled organizations to manage complex environments with never-before-seen efficiency. The emergence of programming language-based IaC tools like Pulumi then further blurred the lines between application development and infrastructure management, empowering developers to take a more active role in ops. ... For DevOps and platform engineering leaders, this evolution means preparing for a future where cloud infrastructure management becomes increasingly automated, intelligent and integrated with other aspects of the software development life cycle. It also highlights the importance of fostering a culture of continuous learning and adaptation, as the IaC landscape continues to evolve at a rapid pace. ... Firefly’s “State of Infrastructure as Code (IaC)” report is an annual pulse check on the rapidly evolving state of IaC adoption, maturity and impact. Over the course of the past few editions, this report has become an increasingly crucial resource for DevOps professionals, platform engineers and site reliability engineers (SREs) navigating the complexities of multicloud environments and a changing IaC tooling landscape.


Consent Managers under the Digital Personal Data Protection Act: A Game Changer or Compliance Burden?

The use of Consent Managers provides advantages for both Data Fiduciaries and Data Principals. For Data Fiduciaries, Consent Managers simplify compliance with consent-related legal requirements, making it easier to manage and document user consent in line with regulatory obligations. For Data Principals, Consent Managers offer a streamlined and efficient way to grant, modify, and revoke consent, empowering them with greater control over how their personal data is shared. This enhanced efficiency in managing consent also leads to faster, more secure, and smoother data flows, reducing the complexities and risks associated with data exchanges. Additionally, Consent Managers play a crucial role in helping Data Principals exercise their right to grievance redressal. ... Currently, Data Fiduciaries can manage user consent independently, making the role of Consent Managers optional. If this remains voluntary, many companies may avoid them, reducing their effectiveness. For Consent Managers to succeed, they need regulatory support, flexible compliance measures, and a business model that balances privacy protection with industry participation. ... Rooted in the fundamental right to privacy under Article 21 of the Constitution of India, the DPDPA aims to establish a structured approach to data processing while preserving individual control over personal information.


The future of AI isn’t the model—it’s the system

Enterprise leaders are thinking differently about AI in 2025. Several founders here told me that unlike in 2023 and 2024, buyers are now focused squarely on ROI. They want systems that move beyond pilot projects and start delivering real efficiencies. Mensch says enterprises have developed “high expectations” for AI, and many now understand that the hard part of deploying it isn’t always the model itself—it’s everything around it: governance, observability, security. Mistral, he says, has gotten good at connecting these layers, along with systems that orchestrate data flows between different models and subsystems. Once enterprises grapple with the complexity of building full AI systems—not just using AI models—they start to see those promised efficiencies, Mensch says. But more importantly, C-suite leaders are beginning to recognize the transformative potential. Done right, AI systems can radically change how information moves through a company. “You’re making information sharing easier,” he says. Mistral encourages its customers to break down silos so data can flow across departments. One connected AI system might interface with HR, R&D, CRM, and financial tools. “The AI can quickly query other departments for information,” Mensch explains. “You no longer need to query the team.”


Generative AI is finally finding its sweet spot, says Databricks chief AI scientist

Beyond the techniques, knowing what apps to build is itself a journey and something of a fishing expedition. "I think the hardest part in AI is having confidence that this will work," said Frankle. "If you came to me and said, 'Here's a problem in the healthcare space, here are the documents I have, do you think AI can do this?' my answer would be, 'Let's find out.'" ... "Suppose that AI could automate some of the most boring legal tasks that exist?" offered Frankle, whose parents are lawyers. "If you wanted an AI to help you do legal research, and help you ideate about how to solve a problem, or help you find relevant materials -- phenomenal!" "We're still in very early days" of generative AI, "and so, kind of, we're benefiting from the strengths, but we're still learning how to mitigate the weaknesses." ... In the midst of uncertainty, Frankle is impressed with how customers have quickly traversed the learning curve. "Two or three years ago, there was a lot of explaining to customers what generative AI was," he noted. "Now, when I talk to customers, they're using vector databases." "These folks have a great intuition for where these things are succeeding and where they aren't," he said of Databricks customers. Given that no company has an unlimited budget, Frankle advised starting with an initial prototype, so that investment only proceeds to the extent that it's clear an AI app will provide value.


Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The strategy plan includes a table of activities and estimated timelines, a detailed breakdown of actions in specific categories, and a list of projected long- and short-term outcomes. The goals are ambitious in scope: a desired short-term outcome is to “mature existing awareness about privacy across multiple domains of life” so that “individuals will develop a more nuanced understanding of privacy issues recognising their significance across various aspects of their lives, including personal, professional, and social domains.” Laws, skills training and better security tools are one thing, but changing how people understand their privacy is a major social undertaking. The OAIC’s long-term outcomes seem more rooted in practicality; they include the widespread implementation of enhanced privacy compliance practices for organizations, better public understanding of the OAIC’s role as regulator, and enhanced data handling industry standards. ... AI is a matter of ongoing concern, and compliance for model training and development will be a major focus for the regulator. In late February, Kind delivered a speech on privacy and security in retail that references her decision on the Bunnings case, which led to the publication of guidance on the use of facial recognition technology, focused on four key privacy concepts: necessity/proportionality, consent/transparency, accuracy/bias, and governance.


Hiring privacy experts is tough — here’s why

“Some organizations think, ‘Well, we’re funding security, and privacy is basically the same thing, right?’ And I think that’s really one of my big concerns,” she says. This blending of responsibilities is reflected in training practices, according to Kazi, who notes how many organizations combine security and privacy training, which isn’t inherently problematic, but it carries risks. “One of the questions we ask in our survey is, ‘Do you combine security training and privacy training?’ Some organizations say they do not necessarily see it as a bad thing, but you can … be doing security, but you’re not doing privacy. And so that’s what’s highly concerning is that you can’t have privacy without security, but you could potentially do security well without considering privacy.” As Trovato emphasizes, “cybersecurity people tend to be from Mars and privacy people from Venus”, yet he also observes how privacy and cybersecurity professionals are often grouped together, adding to the confusion about what skills are truly needed. ... “Privacy includes how are we using data, how are you collecting it, who are you sharing it with, how are you storing it — all of these are more subtle component pieces, and are you meeting the requirements of the customer, of the regulator, so it’s a much more outward business focus activity day-to-day versus we’ve got to secure everything and make sure it’s all protected.”


Security Maturity Models: Leveraging Executive Risk Appetite for Your Secure Development Evolution

With developers under pressure to produce more code than ever before, development teams need to have a high level of security maturity to avoid rework. That necessitates having highly skilled personnel working within a strategic, prevention-focused framework. Developer and AppSec teams must work closely together, as opposed to the old model of operating as separate entities. Today, developers need to assume a significant role in ensuring security best practices. The most recent BSIMM report from Black Duck Software, for instance, found that there are only 3.87 AppSec professionals for every 100 developers, which doesn’t bode well for AppSec teams trying to secure an organization’s software all on their own. A critical part of learning initiatives is the ability to gauge the progress of developers in the program, both to ensure that developers are qualified to work on the organization’s most sensitive projects and to assess the effectiveness of the program. This upskilling should be ongoing, and you should always look for areas that can be improved. Making use of a tool like SCW’s Trust Score, which uses benchmarks to gauge progress both internally and against industry standards, can help ensure that progress is being made.


Why thinking like a tech company is essential for your business’s survival

The phrase “every company is a tech company” gets thrown around a lot, but what does that actually mean? To us, it’s not just about using technology — it’s about thinking like a tech company. The most successful tech companies don’t just refine what they already do; they reinvent themselves in anticipation of what’s next. They place bets. They ask: Where do we need to be in five or 10 years? And then, they start moving in that direction while staying flexible enough to adapt as the market evolves. ... Risk management is part of our DNA, but AI presents new types of risks that businesses haven’t dealt with before. ... No matter how good our technology is, our success ultimately comes down to people. And we’ve learned that mindset matters more than skill set. When we launched an AI proof-of-concept project for our interns, we didn’t recruit based on technical acumen. Instead, we looked for curious, self-starting individuals willing to experiment and learn. What we found was eye-opening—these interns thrived despite having little prior experience with AI. Why? Because they asked great questions, adapted quickly, and weren’t afraid to explore. ... Aligning your culture, processes and technology strategy ensures you can adapt to a rapidly changing landscape while staying true to your core purpose.


Realizing the Internet of Everything

The obvious answer to this problem is governance, a set of rules that constrain use and technology to enforce them. The problem, as it is so often with the “obvious,” is that setting the rules would be difficult, enforcing them through technology would be harder, and getting people to believe in the enforcement would be harder still. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab did a video collaboration experiment that involved a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they got in. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in. Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential to letting technology further improve our work and our lives? We do have widespread, if not universal, video surveillance. On a walk this week, I found doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas.


Cloud Security Architecture: Your Guide to a Secure Infrastructure

Threat modeling can be a good starting point, but it shouldn't end with a stack-based security approach. Rather than focusing solely on the technologies, approach security by mapping parts of your infrastructure to equivalent security concepts. Here are some practical suggestions and areas to zoom in on for implementation. ... When protecting workloads in the cloud, consider using some variant of runtime security. Kubernetes users have no shortage of choice here with tools such as Falco, an open-source runtime security tool that monitors your applications and detects anomalous behaviors. However, chances are your cloud provider has some form of dynamic threat detection for your workloads. For example, AWS offers Amazon GuardDuty, which continuously monitors your workloads for malicious activity and unauthorized behavior. ... Implementing two-factor authentication adds an extra layer of protection by requiring a second form of verification, such as an authenticator app or a passkey, in addition to your password. While reaching for your authenticator app every time you log in might seem slightly inconvenient, it's a far better outcome than dealing with the aftermath of a breached account. The minor inconvenience is a small price to pay for the added security it provides.
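The second factor mentioned above — the code an authenticator app displays — is typically an RFC 6238 time-based one-time password (TOTP). A sketch using only the Python standard library (the secret here is a placeholder, and a real deployment would use a library and a securely provisioned key):

```python
# Sketch of an RFC 6238 time-based one-time password (TOTP), the kind of
# code an authenticator app shows. Standard library only; the secret is
# a placeholder for illustration.

import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    # Counter = number of 30-second steps since the Unix epoch (RFC 6238).
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, "sha1").digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"placeholder-shared-key"
now = int(time.time())
server_code = totp(secret, now)
user_code = totp(secret, now)  # what the user's authenticator app would show

# Constant-time comparison avoids leaking information through timing.
assert hmac.compare_digest(server_code, user_code)
print(server_code)
```

Both sides derive the same six digits from the shared secret and the current 30-second window, so the code proves possession of the second factor without transmitting the secret itself.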

Daily Tech Digest - March 13, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein


Becoming an AI-First Organization: What CIOs Must Get Right

"The three pillars of an AI-first organization are data, infrastructure and people. Data must be treated as a strategic asset with robust quality, privacy and security standards," Simha said. Along with responsible AI, responsible data management is equally crucial. When implemented effectively, data privacy, regulatory compliance, bias and security do not pose issues to an AI-first organization. Yeo described the AI-first approach as both a journey and a destination. "Just using AI tools doesn't make you AI-first. Organizations must explore AI's full potential." He compared today's AI evolution to the early days of the internet. "Decades ago, businesses knew they had to go online but didn't know how. Now, if you're not online, you're obsolete. AI is following the same trajectory - it will soon be indispensable for business success." ... Simha stressed the importance of enterprise architecture in AI deployment. "AI success depends on how well data flows across an organization. Organizations must select the right architecture patterns - real-time data processing requires a Kappa architecture, while periodic reporting benefits from a Lambda approach. A well-designed data foundation is crucial," Simha said. As AI adoption grows, ethical concerns and regulatory compliance remain critical considerations. 
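The Kappa/Lambda distinction Simha draws can be made concrete with a toy aggregation. This is a deliberately minimal sketch (invented event data, no real streaming framework): Kappa uses a single incremental streaming path, while Lambda pairs a batch recompute over history with a speed layer for events that arrived since the last batch run.

```python
# Toy contrast of Kappa vs Lambda architecture patterns for computing
# per-sensor totals. Event data is invented for illustration.

events = [("sensor-a", 3), ("sensor-b", 5), ("sensor-a", 4)]

def kappa_totals(stream):
    # Kappa-style: one streaming path, state updated incrementally per event.
    totals = {}
    for key, value in stream:
        totals[key] = totals.get(key, 0) + value
    return totals

def lambda_totals(history, recent):
    # Lambda-style: periodic batch recompute over full history, merged with
    # a speed layer covering only events newer than the last batch run.
    batch = kappa_totals(history)
    speed = kappa_totals(recent)
    return {k: batch.get(k, 0) + speed.get(k, 0) for k in {*batch, *speed}}

print(kappa_totals(events))
print(lambda_totals(events[:2], events[2:]))  # both views agree
```

Real-time processing favors the Kappa path because results update per event; periodic reporting tolerates the Lambda batch layer's recomputation latency, which is the trade-off behind Simha's pattern-selection advice.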


From Box-Ticking to Risk-Tackling: Evolving Your GRC Beyond Audits

The problem, though, is that merely passing an audit does not necessarily mean a business is doing all it can to mitigate its risks. On their own, audits can fall short of driving full GRC maturity for several reasons ... Auditors are generally outsiders to the businesses they audit — which is good in the sense that it makes them objective evaluators. But it can also lead to situations where they have a limited understanding of what's really going on within a company's GRC practices and are beholden to the information provided by the company's team members on the other side of the assessment table. They may not ask the questions needed to gain adequate understanding to assess and find gaps, ultimately overlooking pitfalls that only insiders know about, and which would become obvious only following a higher degree of scrutiny than a standardized audit. ... But for companies that have made advanced GRC investments, such as automations that pull data from across a diverse set of disparate systems, deeper scrutiny will help validate the value that these investments have created. It may also uncover risk management weak points that the business is overlooking, allowing it to strengthen its GRC program even further. It's generally OK, by the way, if your business submits itself to a high degree of risk management scrutiny, only to fail the assessment because its controls are not as robust as it expected. 


How to use ChatGPT to write code - and my favorite trick to debug what it generates

After repeated tests, it became clear that if you ask ChatGPT to deliver a complete application, the tool will fail. A corollary to this observation is that if you know nothing about coding and want ChatGPT to build something, it will fail. Where ChatGPT succeeds -- and does so very well -- is in helping someone who already knows how to code to build specific routines and get tasks done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and paste that into your project, the tool will do quite well. Also, remember that, while ChatGPT appears to have a tremendous amount of domain-specific knowledge (and often does), it lacks wisdom. As such, the tool may be able to write code, but it won't be able to write code containing the nuances for specific or complex problems that require deep experience. Use ChatGPT to demo techniques, write small algorithms, and produce subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks. ... But you can do several things to help refine your code, debug problems, and anticipate errors that might crop up. My favorite new AI-enabled trick is to feed code to a different ChatGPT session (or a different chatbot entirely) and ask, "What's wrong with this code?"


How AI-enabled ‘bossware’ is being used to track and evaluate your work

Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can be used to track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It also generates productivity reports, classifies activities, and detects policy violations. ... LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions, Moradi said. However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone’s job performance. “If an algorithm causes someone to be laid off, legal recourse for bias or other issues with the decision-making process is unclear, and it raises important questions about accountability in algorithmic decisions,” she said. The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. “So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes,” Moradi said.


Attackers Can Manipulate AI Memory to Spread Lies

By crafting a series of seemingly innocuous prompts, an attacker can insert misleading data into an AI agent's memory bank, which the model later relies on to answer unrelated queries from other users. Researchers tested Minja on three AI agents developed on top of OpenAI's GPT-4 and GPT-4o models. These include RAP, a ReAct agent with retrieval-augmented generation that integrates past interactions into future decision-making for web shops; EHRAgent, a medical AI assistant designed to answer healthcare queries; and QA Agent, a custom-built question-answering model that reasons using Chain of Thought and is augmented by memory. A Minja attack on the EHRAgent caused the model to misattribute patient records, associating one patient's data with another. In the RAP web shop experiment, a Minja attack tricked the AI into recommending the wrong product, steering users searching for toothbrushes to a purchase page for floss picks. The QA Agent fell victim to manipulated memory prompts, producing incorrect answers to multiple-choice questions based on poisoned context. Minja operates in stages. An attacker interacts with an AI agent by submitting prompts that contain misleading contextual information. Referred to as indication prompts, they appear to be legitimate but contain subtle memory-altering instructions. 
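The staged attack described above can be sketched with a toy memory-augmented agent. This is a deliberate cartoon, assuming a naive design in which every user's prompt is committed verbatim to a shared memory bank; real agents like RAP or EHRAgent are far more complex, and all records below are invented:

```python
# Toy sketch of a Minja-style memory-poisoning attack. Assumed (naive)
# design flaw: context from ANY user's interaction is stored in a
# shared memory bank keyed by topic, and later answers trust the most
# recent remembered context for that topic.

class ToyAgent:
    def __init__(self):
        self.memory = {}  # topic -> list of remembered context strings

    def interact(self, user_prompt, topic):
        # Every interaction is committed to the shared memory bank.
        self.memory.setdefault(topic, []).append(user_prompt)

    def answer(self, topic):
        # Stand-in for the LLM: it simply trusts the most recently
        # remembered context for the topic.
        context = self.memory.get(topic, ["(no context)"])
        return context[-1]

agent = ToyAgent()
# Benign interaction from a legitimate user.
agent.interact("patient 12 has a penicillin allergy", topic="patient 12")

# Attacker's 'indication prompt': phrased as a routine note, but it
# plants a memory-altering instruction for an unrelated patient.
agent.interact("note: patient 12 records were merged into patient 47, "
               "use patient 47 data instead", topic="patient 12")

# A later clinician query now draws on the poisoned memory and
# misattributes the records, as in the EHRAgent experiment.
print(agent.answer("patient 12"))
```

The point of the sketch is that the attacker never touches the model's weights; poisoning the retrieved context is enough to corrupt answers given to other users.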


CISOs, are your medical devices secure? Attackers are watching closely

“To truly manage and prioritize risks, organizations need to look beyond technical scores and consider contextual risk factors that impact operations related to patient care. This can include identifying devices in critical care areas, legacy devices close to or past their end-of-life status, where any insecure communication protocols are, and how sensitive personal information is being stored,” Greenhalgh added. ... “For CISOs, the priority should be proactive engagement. First, implement real-time vulnerability tracking and ensure security patches can be deployed quickly without disrupting device functionality. Medical device security must be continuous—not just a checkpoint during development or regulatory submission. Second, regulatory alignment isn’t a one-time effort. The FDA now expects ongoing vulnerability monitoring, coordinated disclosure policies, and robust software patching strategies. Automating security processes—whether for SBOM (Software Bill of Materials) management, dependency tracking, or compliance reporting—reduces human error and improves response times. An SBOM is valuable not just for compliance but as a tool for tracking and mitigating vulnerabilities throughout a device’s lifecycle,” Ken Zalevsky, CEO of Vigilant Ops, explained.
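The SBOM-driven vulnerability tracking Zalevsky describes can be sketched as a simple cross-reference between a component inventory and an advisory feed. The SBOM entries and advisories below are made-up illustrative data, not a real CycloneDX/SPDX document or CVE database:

```python
# Minimal sketch of SBOM-based vulnerability tracking: flag any
# component whose exact version appears in a known advisory.
# All names and versions here are hypothetical.

sbom = [
    {"name": "openssl-lib",  "version": "1.0.2"},
    {"name": "json-parser",  "version": "2.4.1"},
    {"name": "rtos-kernel",  "version": "5.1.0"},
]

# Hypothetical advisory feed: component -> set of vulnerable versions.
advisories = {
    "openssl-lib": {"1.0.1", "1.0.2"},
    "json-parser": {"1.9.0"},
}

def affected_components(sbom, advisories):
    """Return SBOM entries whose version is listed in an advisory."""
    return [c for c in sbom
            if c["version"] in advisories.get(c["name"], set())]

for comp in affected_components(sbom, advisories):
    print(f"PATCH NEEDED: {comp['name']} {comp['version']}")
```

Run continuously against a live advisory feed, a check like this is what turns an SBOM from a compliance artifact into a lifecycle monitoring tool.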


Can AI Teach You Empathy?

By leveraging AI-driven insights, banks can tailor their training programs to address specific skill gaps and enhance employee development. However, AI isn’t infallible, and it’s crucial for banks to implement tools that not only support learning but also foster a reliable and effective training environment. Striking the right balance between AI-driven training and human oversight ensures that these tools enhance employee growth without compromising accuracy or effectiveness. ... Experiential learning has long been a cornerstone of learning and development. Students, for example, who participate in experiential learning often develop a deeper understanding of the material and achieve statistically better outcomes than those who do not. While AI may not perfectly replicate a customer’s response, it provides new employees with a valuable opportunity to practice handling complex issues before interacting with real customers. AI-powered versions of these trainings can make it more accessible, allowing more employees to benefit. ... Many employees find it challenging to incorporate AI into their daily tasks and may need guidance to understand its value, especially in managing customer interactions. Some may also be resistant, fearing that AI could eventually replace their jobs, Huang says.


The Missing Piece in Platform Engineering: Recognizing Producers

The evolution of technology has shown us time and again that those who innovate are the ones who shape the future. Alan Kay’s words resonate strongly in the modern era, where software, artificial intelligence, and digital transformation continue to drive change across industries. ... “A Platform is a curated experience for engineers (the platform’s customers)” is a quote from the Team Topologies book. It is excellent and doesn’t contradict the platform business way of thinking, but it only calls out one side of the producer/consumer model. This is precisely the trap I fell into. When I worked with platform builders, we focused almost entirely on the application teams that consumed platform services. We rapidly became the blocker to those teams, just like the SRE and DevOps teams that came before us. We couldn’t onboard capabilities and features fast enough, meaning we were supporting the old ways while trying to build the new. ... Chris Plank, Enterprise Architect at NatWest, discusses this in our interview for his Platform Engineering Day talk: “We have since been set four challenges by leadership that I talk about: do things faster, do things simpler, enable inner sourcing, and deliver centralized capabilities in a self-service way… Our inner sourcing model will allow us to have multiple teams working on our platform… They are empowered to start contributing changes.”


Data Centers in Space: Separating Fact from Science Fiction

Among the many reasons for interest in orbital data centers is the potential for improved sustainability. However, the definition of a data center in space remains fluid, shaped by current technological limitations and evolving industry perspectives. Lonestar Data Holdings chairman and CEO Christopher Slott told Data Center Knowledge that his firm works from the definitions of a data center from industry standards bodies including the Uptime Institute and the Building Industry Consulting Service International (BICSI). ... Axiom Space plans to deploy larger ODC infrastructure in the coming years that is more similar to terrestrial data centers in terms of utility and capacity. The goal is to develop and operationalize terrestrial-grade cloud regions in low-Earth orbit (LEO). ... James noted that space presents the ultimate edge computing challenge – limited bandwidth, extreme conditions, and no room for failure. “To ensure resilience and autonomy, the platform incorporates automated rollbacks and self-healing capabilities through delta updates and health monitoring,” James said. ... With the Axiom Space deployment, the initial workloads will be small but scalable to the much larger ODC infrastructure that the company plans to deploy in the coming years. “Red Hat Device Edge enables secure, low-latency data processing directly on the ISS, allowing applications to run where the data is being generated,” James said.


CISA cybersecurity workforce faces cuts amid shifting US strategy

Analysts suggest these layoffs and funding cuts indicate a broader strategic shift in the U.S. government’s cybersecurity approach. Neil Shah, VP at Counterpoint Research, sees both risks and opportunities in the restructuring. “In the near to mid-term, this could weaken the US cybersecurity infrastructure. However, with AI proliferating, the US government likely has a Plan B — potentially shifting toward privatized cybersecurity infrastructure projects, similar to what we’re seeing with Project Stargate for AI,” Shah said. “If these gaps aren’t filled with viable alternatives, vulnerabilities could escalate from small-scale exploits to large-scale cyber incidents at state or federal levels. Signs point to a broader cybersecurity strategy reboot, with funding likely being redirected toward more efficient and sophisticated players rather than a purely vertical, government-led approach.” While some fear heightened risks, others argue the shift could lead to more tech-driven solutions. Faisal Kawoosa, founder and lead analyst at Techarc, views the move as part of a larger digital transformation. “Elon Musk’s role is not just about cost-cutting but also about leveraging technology to create more efficient systems,” Kawoosa said. “DOGE operates as a digital transformation program for US governance, exploring tech-first approaches to achieving similar or better results.”

Daily Tech Digest - March 12, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you made them feel." -- Mary Kay Ash



Rethinking Firewall and Proxy Management for Enterprise Agility

Firewall and proxy management follows a simple rule: block all ports by default and allow only essential traffic. Recognizing that developers understand their applications best, why not empower them to manage firewall and proxy changes as part of a “shift security left” strategy? In practice, however, every security specialist knows what happens when deadlines are tight: rather than working out the exact IP range an application needs, developers open connectivity to the entire internet with the intention of refining it later. Temporary fixes, if left unchecked, can evolve into serious vulnerabilities. ... Periodically auditing firewall and proxy rule sets is essential to maintaining security, but it is not a substitute for a robust approval process. Firewalls and proxies are exposed to external threats, and attackers might exploit misconfigurations before periodic audits catch them. Blocking insecure connections on a firewall when the application is already live requires re-architecting the solution, which is costly and time-consuming. Thus, preventing risky changes must be the priority.
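The default-deny model, and a lint for the "open to the entire internet" shortcut, can be sketched with the standard library. The rule format is illustrative, not any real firewall's API:

```python
# Sketch of default-deny rule evaluation plus a check that flags
# overly broad source ranges before they reach production.
import ipaddress

rules = [
    {"allow": "10.0.0.0/8",     "port": 443},   # internal HTTPS
    {"allow": "192.168.1.0/24", "port": 22},    # admin SSH
    {"allow": "0.0.0.0/0",      "port": 8080},  # the risky shortcut
]

def is_allowed(src_ip, port, rules):
    """Default deny: traffic passes only if some rule matches."""
    ip = ipaddress.ip_address(src_ip)
    return any(port == r["port"] and ip in ipaddress.ip_network(r["allow"])
               for r in rules)

def overly_broad(rules, threshold=2**24):
    """Flag rules whose source range exceeds `threshold` addresses
    (the default lets a /8 through but catches 0.0.0.0/0)."""
    return [r for r in rules
            if ipaddress.ip_network(r["allow"]).num_addresses > threshold]

print(is_allowed("10.1.2.3", 443, rules))   # True: explicitly allowed
print(is_allowed("10.1.2.3", 9999, rules))  # False: blocked by default
print(overly_broad(rules))                  # flags the 0.0.0.0/0 rule
```

A check like `overly_broad` run in the approval pipeline is one way to let developers self-serve rule changes while still preventing the "fix it later" ranges the article warns about.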


Multicloud: Tips for getting it right

It’s obvious that a multicloud strategy — regardless of what it actually looks like — will further increase complexity. This is simply because each cloud platform works with its own management tools, security protocols and performance metrics. Anyone who wants to integrate multicloud into their IT landscape needs a robust management system that can handle the specific requirements of the different environments while ensuring an overview and control across all platforms. This is necessary not only for reasons of handling and performance but also to be as free as possible when choosing the optimal provider for the respective application scenario. This requires cross-platform technologies and tools. The large hyperscalers do provide interfaces for data exchange with other platforms as standard. ... In general, anyone pursuing a multicloud strategy should take steps in advance to ensure that complexity does not lead to chaos but to more efficient IT processes. Security is one of the main issues. And it is twofold: on the one hand, the networked services must be protected in themselves and within their respective platforms. On the other hand, the entire construct with its various architectures and systems must be secure. It is well known that the interfaces are potential gateways for unwelcome “guests”.


FinOps and AI: A Winning Strategy for Cost-Efficient Growth

FinOps is a management approach focused on shared responsibility for cloud computing infrastructure and related costs. ... Companies are attempting to drink from the AI firehose, and unfortunately, they’re creating AI strategies in real-time as they rush to drive revenue and staff productivity. Ideally, you want a foundation in place before using AI in operations. This should include an emphasis on cost management, resource allocation, and keeping tabs on ROI. This is also the focus of FinOps, which can prevent errors and improve processes to further AI adoption. ... To begin, companies should create a budget and forecast the AI projects they want to take on. This planning is a pillar of FinOps and should accurately assess the total cost of initiatives, emphasizing resource allocation (including staffing) and eliminating billing overruns. Cost optimization can also help identify opportunities and reduce expenses. Applying FinOps to the new AI services in the cloud could drive scalability and cost efficiency, as these services are much more sensitive to overruns and inefficient usage. Even if organizations are not implementing AI into end-user workloads, there is still an opportunity to craft internal systems utilizing AI to help identify operational efficiencies and implement cost controls on existing infrastructure.
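The budget-and-forecast pillar described above amounts to comparing forecast spend against actuals and surfacing overruns early. A minimal sketch, with invented project names and figures:

```python
# Toy FinOps check: flag hypothetical AI projects whose actual spend
# exceeds forecast by more than a tolerance. All figures are invented.

forecast = {"chatbot-pilot": 40_000, "doc-summarizer": 25_000}
actuals  = {"chatbot-pilot": 52_000, "doc-summarizer": 21_000}

def overruns(forecast, actuals, tolerance=0.10):
    """Return projects over budget by more than `tolerance`, mapped to
    their fractional overrun."""
    flagged = {}
    for project, budget in forecast.items():
        spent = actuals.get(project, 0)
        if spent > budget * (1 + tolerance):
            flagged[project] = round(spent / budget - 1, 2)
    return flagged

print(overruns(forecast, actuals))  # {'chatbot-pilot': 0.3}
```

Even this simple variance check, run per billing cycle, is enough to catch the billing overruns the article says AI workloads are especially prone to.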


3 Signs Your Startup Needs a CTO — But Not As a Full-Time Hire

CTO as a service provides businesses with access to experienced technical leadership without the commitment of a full-time hire. This model allows startups to leverage specialized expertise on an as-needed basis. ... An on-demand expert can bridge this gap by offering leadership that goes beyond programming. This model provides access to strategic guidance on technology choices, project architecture and team dynamics. During a growth phase, mistakes in management won't be forgiven. ... Hiring a full-time CTO can strain tight budgets, diverting funds from critical areas like product development and market expansion. However, with the CTO as a service model, companies can access top-tier expertise tailored to their financial capabilities. This flexibility allows startups to engage a tech strategist on a project basis, paying only for the high-quality leadership they need when they need it (and if needed). ... Engaging outsourced expertise offers a viable solution, providing a fresh perspective on existing challenges at a cost that remains accessible, even amid resource constraints. This strategic move allows businesses to tap into a wealth of external knowledge, leveraging insights gained from diverse industry experiences. Such an external viewpoint can be invaluable, especially when navigating complex technical hurdles, ensuring that projects not only survive but thrive. 


How to Turn Developer Team Friction Into a Positive Force

Developer team friction, while often seen as a negative trait, can actually become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills." Team friction often occurs when a developer passionately advocates a new approach or solution. ... Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team, should always be the No. 1 priority," Miears says. "Make it a safe space." As a leader it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do it’s important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation." 


Enterprise Architecture in the Digital Age: Navigating Challenges and Unleashing Transformative Potential

EA is about crafting a comprehensive, composable, and agile architecture-aligned blueprint that synchronizes an organization’s business processes, workforce, and technology with its strategic vision. Rooted in frameworks like TOGAF, it transcends IT, embedding itself into the very heart of a business. ... In this digital age, EA’s role is more critical than ever. It’s not just about maintaining systems; it’s about equipping organizations—whether agile startups or sprawling, successful enterprises—for the disruptions driven by rapid technological evolution and innovation. ... As we navigate inevitable future complexities, Enterprise Architecture stands as a critical differentiator between organizations that merely survive digital disruption and those that harness it for competitive advantage. The most successful implementations of EA share common characteristics: they integrate technical depth with business acumen, maintain adaptable governance frameworks, and continuously measure impact through concrete metrics. These aren’t abstract benefits—they represent tangible business outcomes that directly impact market position and financial performance. Looking forward, EA will increasingly focus on orchestrating complex ecosystems rather than simply mapping them. 


Generative AI Drives Emphasis on Unstructured Data Security

As organizations pivot their focus, the demand for vendors specializing in security solutions, such as data classification, encryption and access control, tailored to unstructured data is expected to increase. This increased demand reflects the necessity for robust and adaptable security measures that can effectively protect the vast and varied types of unstructured data organizations now manage. In tandem with this shift, the rising significance of unstructured data in driving business value and innovation compels organizations to develop expertise in unstructured data security. ... Organizations should prioritize investment in security controls specifically designed for unstructured data. This includes tools with advanced capabilities such as rapid data classification, entitlement management and unclassified data redaction. Solutions that offer prompt engineering and output filtering can also further enhance data security measures. ... Building a knowledgeable team is crucial for managing unstructured data security. Organizations should invest in staffing, training and development to cultivate expertise in this area. This involves hiring data security professionals with specialized skills and providing ongoing education to ensure they are equipped to handle the unique challenges associated with unstructured data. 
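The "unclassified data redaction" and output-filtering controls mentioned above can be illustrated with pattern-based masking of free text. Real classifiers are far more sophisticated; the two regexes below are a deliberately minimal sketch:

```python
# Minimal sketch of redacting sensitive patterns in unstructured text
# before it reaches a generative model or a log. Illustrative only:
# production tools combine classification models with many more rules.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Contact jane.doe@example.com, SSN 123-45-6789, re: contract."
print(redact(note))
# Contact [EMAIL REDACTED], SSN [SSN REDACTED], re: contract.
```

Applied at ingestion time, a filter like this keeps unclassified documents from leaking identifiers into model prompts or training sets.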


Quantum Pulses Could Help Preserve Qubit Stability, Researchers Report

The researchers used a model of two independent qubits, each interacting with its own environment through a process called pure dephasing. This form of decoherence arises from random fluctuations in the qubit’s surroundings, which gradually disrupt its quantum state. The study analyzed how different configurations of PDD pulses — applying them to one qubit versus both — affected the system’s evolution. By employing mathematical models that calculate the quantum speed limit based on changes in quantum coherence, the team measured the impact of periodic pulses on the system’s stability. When pulses were applied to both qubits, they observed a near-complete suppression of dephasing, while applying pulses to just one qubit provided partial protection. Importantly, the researchers investigated the effects of different pulse frequencies and durations to determine the optimal conditions for coherence preservation. ... While the study presents promising results, the effectiveness of PDD depends on the ability to deliver precise, high-frequency pulses. Practical quantum computing systems must contend with hardware limitations, such as pulse imperfections and operational noise, which could reduce the technique’s efficiency.
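The refocusing mechanism behind pulses like these can be shown with a cartoon model: under quasi-static noise (a random but constant detuning per run), a single pi pulse at the midpoint cancels the accumulated phase exactly. This sketch is not the study's two-qubit model or its quantum-speed-limit analysis, and the noise parameters are arbitrary:

```python
# Toy illustration of dephasing suppression by refocusing pulses,
# assuming quasi-static noise. Each pi pulse flips the sign of
# subsequent phase accumulation, so symmetric pulse placement cancels
# a constant random detuning.
import cmath, random

random.seed(0)
RUNS, T = 2000, 1.0

def coherence(n_pulses):
    """|<exp(i*phi)>| after time T with n equally spaced pi pulses."""
    total = 0
    for _ in range(RUNS):
        delta = random.gauss(0, 5.0)   # random detuning for this run
        segment = T / (n_pulses + 1)
        phi, sign = 0.0, 1
        for _ in range(n_pulses + 1):
            phi += sign * delta * segment
            sign = -sign               # pi pulse reverses accumulation
        total += cmath.exp(1j * phi)
    return abs(total) / RUNS

print(f"no pulses: {coherence(0):.3f}")  # close to zero: dephased
print(f"1 pulse:   {coherence(1):.3f}")  # 1.000: perfectly refocused
```

Real environments fluctuate during the evolution, which is why the study's results depend on pulse frequency and duration rather than a single midpoint echo.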


Disaster Recovery Plan for DevOps

While developing a disaster recovery plan for your DevOps stack, it’s worth considering the challenges DevOps teams face here. DevOps ecosystems always have complex architecture, like interconnected pipelines and environments (e.g., GitHub and Jira integration). Thus, a single failure, whether due to a corrupted artifact or a ransomware attack, can cascade through the entire system. Moreover, the rapid pace of change in DevOps can complicate data consistency and integrity checks during the recovery process. Another issue is data retention policies. SaaS tools often impose limited retention periods – usually, they vary from 30 to 365 days. ... Your backup solution should allow you to: automate your backups, scheduling them at the most appropriate interval between copies, so that no data is lost in the event of failure; provide long-term or even unlimited retention, which will help you restore data from any point in time; apply the 3-2-1 backup rule, with replication between all the storages, so that if one backup location fails you can restore from another; and protect against ransomware, including AES encryption with your own encryption key, immutable backups, and restore and DR capabilities.
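The 3-2-1 rule mentioned above (at least 3 copies, on at least 2 different media or storage types, with at least 1 copy offsite) can be checked mechanically against a backup configuration. The config format below is hypothetical:

```python
# Sketch of a 3-2-1 compliance check for a backup configuration.
# Locations and media types are invented for illustration.

backups = [
    {"location": "onsite",  "media": "disk"},
    {"location": "cloud-a", "media": "object-storage"},
    {"location": "cloud-b", "media": "object-storage"},
]

def satisfies_321(copies):
    """True if: >=3 copies, >=2 media types, >=1 copy offsite."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["location"] != "onsite" for c in copies))

print(satisfies_321(backups))      # True
print(satisfies_321(backups[:1]))  # False: a single onsite copy
```

Running a check like this whenever the backup configuration changes is a cheap way to keep the replication guarantees from silently eroding.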


The state of ransomware: Fragmented but still potent despite takedowns

“Law enforcement takedowns have disrupted major groups like LockBit but newly formed groups quickly emerge akin to a good old-fashioned game of whack-a-mole,” said Jake Moore, global cybersecurity advisor at ESET. “Double and triple extortion, including data leaks and DDoS threats, are now extremely common, and ransomware-as-a-service models make attacks even easier to launch, even by inexperienced criminals.” Moore added: “Law enforcement agencies have struggled over the years to take control of this growing situation as it is costly and resource heavy to even attempt to take down a major criminal network.” ... Meanwhile, enterprises are taking proactive measures to defend against ransomware attacks. These include implementing zero trust architectures, enhancing endpoint detection and response (EDR) solutions, and conducting regular exercises to improve incident response readiness. Anna Chung, principal researcher at Palo Alto Networks’ Unit 42, told CSO that advanced tools such as next-gen firewalls, immutable backups, and cloud redundancies, while keeping systems regularly patched, can help defend against cyberattacks. Greater use of gen AI technologies by attackers is likely to bring further challenges, Chung warned. 

Daily Tech Digest - March 11, 2025


Quote for the day:

“What seems to us as bitter trials are often blessings in disguise.” -- Oscar Wilde


This new AI benchmark measures how much models lie

Scheming, deception, and alignment faking (when an AI model knowingly pretends to change its values while under duress) are ways AI models undermine their creators and can pose serious safety and security threats. Research shows OpenAI's o1 is especially good at scheming to maintain control of itself, and Claude 3 Opus has demonstrated that it can fake alignment. To clarify, the researchers defined lying as "(1) making a statement known (or believed) to be false, and (2) intending the receiver to accept the statement as true," as opposed to other false responses, such as hallucinations. The researchers said the industry hasn't had a sufficient method of evaluating honesty in AI models until now. ... "Many benchmarks claiming to measure honesty in fact simply measure accuracy -- the correctness of a model's beliefs -- in disguise," the report said. Benchmarks like TruthfulQA, for example, measure whether a model can generate "plausible-sounding misinformation" but not whether the model intends to deceive, the paper explained. ... "As a result, more capable models can perform better on these benchmarks through broader factual coverage, not necessarily because they refrain from knowingly making false statements," the researchers said. In this way, MASK is the first test to differentiate accuracy and honesty.
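The accuracy-versus-honesty distinction can be made concrete in a few lines: honesty compares a model's statement to its own elicited belief, while accuracy compares the statement to ground truth. This sketch collapses the paper's intent condition into a simple belief/statement mismatch, and the records are illustrative:

```python
# Toy sketch of why accuracy benchmarks cannot measure honesty:
# they conflate honest mistakes with deliberate lies.

def evaluate(belief, statement, ground_truth):
    return {
        "honest":   statement == belief,        # did it say what it believes?
        "accurate": statement == ground_truth,  # did it say what is true?
    }

# Honest mistake: the model sincerely repeats a wrong belief.
print(evaluate(belief="Paris", statement="Paris", ground_truth="Lyon"))
# {'honest': True, 'accurate': False}

# Lie: the model believes the truth but states otherwise. An accuracy
# benchmark scores both cases as equally wrong; only an honesty
# benchmark separates them.
print(evaluate(belief="Lyon", statement="Paris", ground_truth="Lyon"))
# {'honest': False, 'accurate': False}
```

This is why, as the report notes, broader factual coverage can raise a model's score on accuracy-style benchmarks without saying anything about whether it refrains from knowingly false statements.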


EU looks to tech sovereignty with EuroStack amid trade war

“Software forms the operational core of digital infrastructure, encompassing operating systems, application platforms, and algorithmic frameworks,” the report notes. “It powers critical functions such as identity management, electronic payments, transactions, and document delivery, forming the foundation of digital public infrastructures.” EuroStack could also help empower citizens and businesses through digital identity systems, secure payments and data platforms. It envisions digital IDs as the gateway to Europe’s digital infrastructure and a way to enable seamless access while safeguarding privacy and sovereignty according to EU regulations. “By overcoming the limitations seen in models like India Stack, which rely on centralized biometric IDs and foreign cloud infrastructure, the EuroStack offers a federated, privacy-preserving platform,” the study explains. EuroStack’s ambitious goals to support indigenous technology will require plenty of funds: As much as 300 billion euros (US$324.9 billion) for the next 10 years, according to the study. Chamber of Progress, a tech industry trade group that includes U.S. tech companies, puts the price tag even higher, at 5 trillion euros ($5.4 trillion). But according to EuroStack’s proponents, the results are worth it.


Companies are drowning in high-risk software security debt — and the breach outlook is getting worse

Organizations are taking longer to fix security flaws in their software, and the security debt involved is becoming increasingly critical as a result. According to application security vendor Veracode’s latest State of Software Security report, the average fix time for security flaws has increased from 171 days to 252 days over the past five years. ... Chris Wysopal, co-founder and chief security evangelist at Veracode, told CSO that one aspect of application security that has gotten progressively worse over the years is the time it takes to fix flaws. “There are many reasons for this, but the ever-growing scope and complexity of the software ecosystem is a core issue,” Wysopal said. “Organizations have more applications and vastly more code to keep on top of, and this will only increase as more teams adopt AI for code generation” — an issue compounded by the potential security implications of AI-generated code across in-house software and third-party dependencies alike. ... “Most organizations suffer from fragmented visibility over the software flaws and risks within their applications, with sprawling toolsets that create ‘alert fatigue’ at the same time as silos of data to interpret and make decisions about,” Wysopal said. “The key factors that help them address the security backlog are the ability to prioritize remediation of flaws based on risk.”


AI Coding Assistants Are Reshaping Engineering — Not Replacing Engineers

The next big leap in AI coding assistants will be when they start learning from how developers work in real time. Right now, AI doesn’t recognize coding patterns within a session. If I perform the same action 10 times in a row, none of the current tools ask, “Do you want me to do this for the next 100 lines?” But Vi and Emacs solved this problem decades ago with macros and automated keystroke reduction. AI coding assistants haven’t even caught up to that efficiency level yet. Eventually, AI assistants might become plugin-based so developers can choose the best AI-powered features for their preferred editor. Deeply integrated IDE experiences will probably offer more functionality, but many developers won’t want to switch IDEs. ... Software engineering is a fast-paced career. Languages, frameworks, and technologies come and go, and the ability to learn and adapt separates those who thrive from those who fall behind. AI coding assistants are another evolution in this cycle. They won’t replace engineers but will change how engineering is done. The key isn’t resisting these tools; it’s learning how to use them properly and staying curious about their capabilities and limitations. Until these tools improve, the best engineers will be the ones who know when to trust AI, when to double-check its output, and how to integrate it into their workflow without becoming dependent on it.


Building generative AI? Get ready for generative UI

Generative UI takes the concept of generative AI and applies it to how we interact with data or systems. Just as generative AI makes data interactive and available in natural language, or creates new images or sound in response to a prompt, so generative UI builds interactive context into how data is displayed, depending on what you are asking for. The goal is to deliver the content that the user wants but also in a format that makes the most of that data for the user too. ... To deliver generative UI, you will have to link up your application with your generative AI components, like your large language model (LLM) and sources of data, and with the tools you use to build the site like Vercel and Next.js. For generative UI, by using React Server Components, you can change the way that you display the output from your LLM service. These components can deliver information that is updated in real time, or is delivered in different ways depending on what formats are best suited to the responses. As you create your application, you will have to think about some of the options that you might want to deliver. As a user asks a question, the generative AI system must understand the request, determine the appropriate function to use, then choose the appropriate React Server Component to display the response back.


Four essential strategies to bolster cyber resilience in critical infrastructure

Cyber resilience isn’t possible when teams operate in silos. In fact, 59% of government leaders report that their inability to synthesize data across people, operations, and finances weakens organizational agility. To bolster cyber resilience, organizations must break down these siloes by fostering cross-departmental collaboration and making it as seamless as possible. Achieving this requires strategic investment in a triad of technologies: A customized, secure collaboration platform; A project management tool like Asana, Trello, or Jira; A knowledge-sharing solution like Confluence or Notion. Once these three foundational tools are in place, organizations should deploy the final piece of the puzzle: a dashboarding or reporting tool. These technologies can help IT leaders pinpoint any silos that exist and start figuring out how to break them down. ... Most organizations understand security’s importance but often treat it as an afterthought. To strengthen cyber resilience, organizations must adopt a security-first mindset, baking security into everything they do. Too often, security teams are siloed from the rest of the organization; they’re roped in at the end when they should be fully integrated from the start. Truly resilient organizations treat security as a shared responsibility, ensuring it’s part of every decision, project, and process. 


Did we all just forget diverse tech teams are successful ones?

The reality is that diverse teams are more productive and report better financial performance. This has been a key advantage of diversity in tech for many years, and it’s continued to this day. Research from McKinsey’s Diversity Matters report showed that those committed to DEI and multi-ethnic representation exhibit a “39% increased likelihood of outperformance” compared to those that aren’t. These same companies also showed an average 27% financial advantage over others. The same performance boosts can be found in executive teams that focus heavily on improving gender diversity, McKinsey found. Companies with representation of women exceeding 30% are “significantly more likely to financially outperform those with 30% or fewer,” the study noted. ... Are you willing to alienate huge talent pools because you want to foster a more ‘masculine’ culture in your company? If you are, then you’re fighting a losing battle and in my opinion deserve to fail. Tech bro culture counts for nothing when that runway comes to an end and you’ve no MVP. Yet again, what this entire debacle comes down to is a highly vocal minority seeking to hamper progress. Big tech might just be going with the flow and pandering to the current prevailing ideological sentiment. In time they might come back around, but that’s what makes it worse.


With critical thinking in decline, IT must rethink application usability

The more IT’s business analysts and developers learn the end business, the better prepared they will be to deliver applications that fit the forms and functions of business processes, and integrate seamlessly into these processes. Part of IT engagement with the business involves understanding business goals and how the business operates, but it’s equally important to understand the skill levels of the employees who will be using the apps. ... The 80/20 rule — i.e., 80% of applications developed are seldom or never used, and 20% are useful — still applies. And it often also applies within that 20% of useful apps, in terms of useful features and functionality. IT must work to ensure what it develops hits a higher target of utility. Users are under constant pressure to do work fast. They meet the challenges by finding ways to do the least possible work per app and may never look at some of the more embedded, complicated, and advanced functionality an app offers. ... Especially in user areas with high turnover, or in other domains that require a moderate to high level of skill, user training and mentoring should be major milestone tasks in every application project, and an ongoing routine after a new application is installed. Business analysts from IT can help with some of this, but the ultimate responsibility falls on non-IT functions, which should have subject matter experts available to mentor and train employees when questions arise.


How digital academies can boost business-ready tech skills for the future

Niche tech skills are becoming essential for complex software projects. With requirements evolving for highly technical roles, there’s a greater need for more competency in using digital tools. Technology professionals need to know how to use the tools effectively and valuably to make meaningful decisions around adoption and implementation. ... By creating links between educational institutions and a hub of tech and digital sector businesses, digital academies can vastly improve how training opportunities are constructed. Whether an organisation is looking to make digital transformation real and upskill on the tools and technology available, or a person wants to career switch into software development, digital academies can support these skilling or upskilling programmes through training on a range of digital tools. An effective digital academy is one with technical experts in software delivery who design, deliver and assess the courses. An academy such as Headforwards Digital Academy can intensively train a person in deep software engineering, taking them from no coding knowledge to becoming a junior software developer in as little as 16 weeks. These industry-led tech training programmes are a more agile and nimble response to education, as they are validated by employers and backed by strong industry support.


Smart cybersecurity spending and how CISOs can invest where it matters

“The most pervasive waste in cybersecurity isn’t from insufficient tools – it’s from investments that aren’t tied to validated risk models. When security spending isn’t part of a closed-loop system that connects real-world threats to measurable outcomes, you’re essentially paying for digital theater rather than actual protection,” Alex Rice, CTO at HackerOne, told Help Net Security. “Many CISOs operate with fragmented security architectures where tools work in isolation, creating dangerous blind spots. As attack surfaces expand across code, AI systems, cloud infrastructure, and traditional IT, this siloed approach isn’t just inefficient – it’s dangerous. Defense in depth requires coordinated visibility across all domains,” Rice added. ... “A HackerOne survey revealed most CISOs don’t find traditional ROI measures useful for security investments. This isn’t surprising – cybersecurity is notoriously difficult to quantify with conventional metrics. More meaningful approaches like Return on Mitigation, which accounts for potential losses prevented, offer a more accurate picture of security’s true business value,” Rice explained. “The uncomfortable truth? We’ve created a tangled ecosystem of point solutions that often disguise rather than address fundamental security gaps. Before purchasing the next shiny tool, ask: Does this solution provide meaningful transparency into your actual security posture?”