
Daily Tech Digest - June 17, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley



Understanding how data fabric enhances data security and governance

“The biggest challenge is fragmentation; most enterprises operate across multiple cloud environments, each with its own security model, making unified governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering Services at Sutherland Global, told InfoWorld. ... Shadow IT is another persistent challenge. According to Sengupta, some enterprises discover that nearly 40% of their data exists outside governed environments. Proactively discovering and onboarding those data sources has become non-negotiable. ... A data fabric deepens an organization’s understanding and control of its data and consumption patterns. “With this deeper understanding, organizations can easily detect sensitive data and workloads in potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert commented. “With deeper control, organizations can then apply the necessary data governance and security measures in near real time to remain compliant.” ... Data security and governance inside a data fabric shouldn’t just be about controlling access to data; it should also include some form of data validation. The clichéd saying “garbage in, garbage out” is all too true when it comes to data. After all, what’s the point of ensuring security and governance on data that isn’t valid in the first place?
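The validation point can be made concrete with a minimal sketch of gating records before they enter a governed store. The field names and rules below are invented for illustration, not drawn from any particular data fabric product.

```python
# Minimal sketch: validate records before they enter a governed data store.
# Field names and rules are hypothetical, for illustration only.
import re

RULES = {
    "email": lambda v: isinstance(v, str)
        and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 130,
}

def validate(record: dict) -> list[str]:
    """Return the field names that are missing or fail their rule."""
    return [field for field, rule in RULES.items()
            if field not in record or not rule(record[field])]

good = {"email": "ada@example.com", "age": 36}
bad = {"email": "not-an-email", "age": -5}
print(validate(good))  # []
print(validate(bad))   # ['email', 'age']
```

Records with a non-empty failure list would be quarantined rather than onboarded, which is the "garbage in, garbage out" guard in miniature.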


AI isn’t taking your job; the big threat is a growing skills gap

While AI can boost productivity by handling routine tasks, it can’t replace the strategic roles filled by skilled professionals, Vianello said. To avoid falling behind, agencies, just like companies, need to invest in adaptable, mission-ready teams with continuously updated skills in cloud, cyber, and AI. The technology, he said, should augment, not replace, human teams, automating repetitive tasks while enhancing strategic work. Success in high-demand tech careers starts with in-demand certifications, real-world experience, and soft skills. Ultimately, high-performing teams are built through agile, continuous training that evolves with the tech, Vianello said. “We train teams to use AI platforms like Copilot, Claude and ChatGPT to accelerate productivity,” Vianello said. “But we don’t stop at tools; we build ‘human-in-the-loop’ systems where AI augments decision-making and humans maintain oversight. That’s how you scale trust, performance, and ethics in parallel.” High-performing teams aren’t born with AI expertise; they’re built through continuous, role-specific, forward-looking education, he said, adding that preparing a workforce for AI is not about “chasing” the next hottest skill. “It’s about building a training engine that adapts as fast as technology evolves,” he said.


Got a new password manager? Don't leave your old logins exposed in the cloud - do this next

Those built-in utilities might have been good enough for an earlier era, but they aren't good enough for our complex, multi-platform world. For most people, the correct option is to switch to a third-party password manager and shut down all those built-in password features in the browsers and mobile devices you use. Why? Third-party password managers are built to work everywhere, with a full set of features that are the same (or nearly so) across every device. After you make that switch, the passwords you saved previously are left behind in a cloud service you no longer use. If you regularly switch between browsers (Chrome on your Mac or Windows PC, Safari on your iPhone), you might even have multiple sets of saved passwords scattered across multiple clouds. It's time to clean up that mess. Once you've stopped using those built-in password managers, it's prudent to track down the outdated saved passwords and delete them from the cloud. I've studied each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft Edge, and Mozilla Firefox. Here's how to find the password management settings for each one, export any saved passwords to a safe place, and then turn off the feature. As a final step, I explain how to purge saved passwords and stop syncing.


AI and technical debt: A Computer Weekly Downtime Upload podcast

Given that GenAI technology hit the mainstream with GPT-4 two years ago, Reed says: “It was like nothing ever before.” And while the word “transformational” tends to be generously overused in technology, he describes generative AI as “transformational with a capital T.” But transformations are not instant, and businesses need to understand how to apply GenAI most effectively and figure out where it does and does not work well. “Every time you hear anything with generative AI, you hear the word journey and we're no different,” he says. “We are trying to understand it. We're trying to understand its capabilities and understand our place with generative AI,” Reed adds. Early adopters are keen to understand how to use GenAI in day-to-day work, which, he says, can range from being an AI-based work assistant or a tool that changes the way people search for information to using AI as a gateway to the heavy lifting required in many organisations. He points out that bet365 is no different. “We have a sliding scale of ambition, but obviously like anything we do in an organisation of this size, it must be measured, it must be understood and we do need to be very, very clear what we're using generative AI for.” One of the clearest use cases for GenAI is in software development.


Cloud Exodus: When to Know It's Time to Repatriate Your Workloads

Because of the inherent scalability of cloud resources, the cloud makes a lot of sense when the compute, storage, and other resources your business needs fluctuate constantly in volume. But if you find that your resource consumption is virtually unchanged from month to month or year to year, you may not need the cloud. You may be able to spend less and enjoy more control by deploying on-prem infrastructure. ... Cloud costs will naturally fluctuate over time due to changes in resource consumption levels. It's normal if cost increases correlate with usage increases. What's concerning, however, is a spike in cloud costs that you can't tie to consumption changes. It's likely in that case that you're spending more either because your cloud service provider raised its prices or your cloud environment is not optimized from a cost perspective. ... You can reduce latency (meaning the delay between when a user requests data on the network and when it arrives) on cloud platforms by choosing cloud regions that are geographically proximate to your end users. But that only works if your users are concentrated in certain areas, and if cloud data centers are available close to them. If this is not the case, you are likely to run into latency issues, which could dampen the user experience you deliver. 
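That "spike you can't tie to consumption" signal lends itself to a simple screen: flag months where cost growth outpaces usage growth by more than some tolerance. The figures and the 10% threshold below are hypothetical.

```python
# Rough screen for cloud cost spikes that aren't explained by usage growth.
# Monthly figures and the 10% tolerance are arbitrary, for illustration.
def unexplained_spikes(costs, usage, tolerance=0.10):
    """Return month indices where cost growth exceeded usage growth + tolerance."""
    flagged = []
    for i in range(1, len(costs)):
        cost_growth = (costs[i] - costs[i - 1]) / costs[i - 1]
        usage_growth = (usage[i] - usage[i - 1]) / usage[i - 1]
        if cost_growth > usage_growth + tolerance:
            flagged.append(i)
    return flagged

monthly_cost = [100, 104, 150, 155]   # spend, in thousands of dollars
monthly_usage = [200, 210, 215, 220]  # normalized resource-hours
print(unexplained_spikes(monthly_cost, monthly_usage))  # [2]
```

Month 2's 44% cost jump against ~2% usage growth is exactly the kind of anomaly worth investigating, whether the cause is a provider price increase or an unoptimized environment.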


The future of data center networking and processing

The optical-to-electrical conversion that is performed by the optical transceiver is still needed in a CPO system, but it moves from a pluggable module located at the faceplate of the switching equipment to a small chip (or chiplet) that is co-packaged very closely to the target ICs inside the box. Data center chipset heavyweights Broadcom and Nvidia have both announced CPO-based data center networking products operating at 51.2 and 102.4 Tb/s. ... Early generation CPO systems, such as those announced by Broadcom and Nvidia for Ethernet switching, make use of high channel count fiber array units (FAUs) that are designed to precisely align the fiber cores to their corresponding waveguides inside the PICs. These FAUs are challenging to make as they require high fiber counts, mixed single-mode (SM) and polarization maintaining (PM) fibers, integration of micro-optic components depending on the fiber-to-chip coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers and multiple connector assemblies. ... In addition to scale and cost benefits, extreme densities can be achieved at the edge of the PIC by bringing the waveguides very close together, down to a pitch of about 30µm, far tighter than is possible with even the thinnest fibers. Next generation fiber-to-chip coupling will enable GPU optics, which will require unprecedented levels of density and scale.


Align AI with Data, Analytics and Governance to Drive Intelligent, Adaptive Decisions and Actions Across the Organisation

Unlocking AI’s full business potential requires building executive AI literacy. They must be educated on AI opportunities, risks and costs to make effective, future-ready decisions on AI investments that accelerate organisational outcomes. Gartner recommends D&A leaders introduce experiential upskilling programs for executives, such as developing domain-specific prototypes to make AI tangible. This will lead to greater and more appropriate investment in AI capabilities. ... Using synthetic data to train AI models is now a critical strategy for enhancing privacy and generating diverse datasets. However, complexities arise from the need to ensure synthetic data accurately represents real-world scenarios, scales effectively to meet growing data demand and integrates seamlessly with existing data pipelines and systems. “To manage these risks, organisations need effective metadata management,” said Idoine. “Metadata provides the context, lineage and governance needed to track, verify and manage synthetic data responsibly, which is essential to maintaining AI accuracy and meeting compliance standards.” ... Building GenAI models in-house offers flexibility, control and long-term value that many packaged tools cannot match. As internal capabilities grow, Gartner recommends organisations adopt a clear framework for build versus buy decisions. 
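As a rough illustration of the synthetic-data-plus-metadata idea, the sketch below samples a synthetic column from summary statistics of a made-up real column and records lineage metadata alongside it. The schema and method are assumptions for illustration, not Gartner's recommendations.

```python
# Sketch: generate a synthetic numeric column from summary statistics of a
# hypothetical real column, and attach lineage metadata so the synthetic
# data can be tracked, verified, and audited downstream.
import random
import statistics
from datetime import datetime, timezone

real_incomes = [42_000, 55_000, 61_000, 48_000, 75_000]  # made-up source data

mu = statistics.mean(real_incomes)
sigma = statistics.stdev(real_incomes)

rng = random.Random(42)  # fixed seed for reproducibility
synthetic_incomes = [rng.gauss(mu, sigma) for _ in range(1000)]

# Lineage metadata: what the data came from, how it was made, and when.
metadata = {
    "source": "real_incomes (summary statistics only, no raw values)",
    "method": "gaussian sample, seed=42",
    "generated_at": datetime.now(timezone.utc).isoformat(),
    "n_records": len(synthetic_incomes),
}
print(metadata["n_records"])
```

A record like `metadata` is the kind of context that lets governance tooling answer "where did this dataset come from, and does it still represent the real distribution?"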


Do Microservices’ Benefits Supersede Their Caveats? A Conversation With Sam Newman

A microservice is independently deployable: I can make a change to it and roll out new versions of it without having to change any other part of my system. So things like avoiding shared databases are really about achieving that independent deployability. And it's a really simple idea that can be quite easy to implement if you know about it from the beginning, but difficult if you're already in a tangled mess. And that idea of independent deployability has interesting benefits: independently deployable services obviously give you low-impact releases, but loads of other benefits start to flow from that. ... The vast majority of people who tell me they have scaling issues often don't have them. They could solve their scaling issues with a monolith, no problem at all, and it would be a more straightforward solution. They're typically organizational scale issues. And so, for me, what the world needs from our IT is product-focused, outcome-oriented, and more autonomous teams. That's what we need, and microservices are an enabler for that. Team topologies helps here too: the DevOps topologies work was happening around the time of the first edition of my book, and its move into the team topologies space by Matthew and Manuel around the second edition helped crystallize a lot of those concepts as well.


Why Businesses Must Upgrade to an AI-First Connected GRC System

Adopting a connected GRC solution enables organizations to move beyond siloed operations by bringing risk and compliance functions onto a single, integrated platform. It also creates a unified view of risks and controls across departments, bringing better workflows and encouraging collaboration. With centralized data and shared visibility, managing complex, interconnected risks becomes far more efficient and proactive. In fact, this shift toward integration reflects a broader trend that is seen in the India Regulatory Technology Business Report 2024–2029 findings, which highlight the growing adoption of compliance automation, AI, and machine learning in the Indian market. The report points to a future where GRC is driven by data, merging operations, technology, and control into a single, intelligent framework. ... An AI-first, connected GRC solution takes the heavy lifting out of compliance. Instead of juggling disconnected systems and endless updates, it brings everything together, from tracking regulations to automating actions to keeping teams aligned. For compliance teams, that means less manual work and more time to focus on what matters. ... A smart, integrated GRC solution brings everything into one place. It helps organizations run more smoothly by reducing errors and simplifying teamwork. It also means less time spent on admin and better use of people and resources where they are really needed.


The Importance of Information Sharing to Achieve Cybersecurity Resilience

Information sharing among different sectors predominantly revolves around threats related to phishing, vulnerabilities, ransomware, and data breaches. Each sector tailors its approach to cybersecurity information sharing based on regulatory and technological needs, carefully considering strategies that address specific risks and identify resolution requirements. For the mobile industry, however, information sharing relating to cyberattacks on the networks themselves and misuse of interconnection signalling is also the focus of significant sharing efforts. Industries learn from each other by adopting sector-specific frameworks and leveraging real-time data to enhance their cybersecurity posture. This includes real-time sharing of indicators of compromise (IoCs) and the tactics, techniques, and procedures (TTPs) associated with phishing campaigns. An example of this is the recently launched Stop Scams UK initiative, bringing together tech, telecoms and finance industry leaders, who will share real-time data on fraud indicators to enhance consumer protection and foster economic security. This is an important development, as without cross-industry information sharing, it becomes difficult to determine whether a cybersecurity attack campaign is sector-specific or indiscriminate.
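To make IoC/TTP sharing concrete, here is a minimal, invented machine-readable indicator record. The schema is ad hoc, for illustration only; real-world exchanges typically use standards such as STIX/TAXII.

```python
# Minimal illustration of a machine-readable indicator-of-compromise (IoC)
# record. The schema is invented; real sharing uses standards like STIX.
import json

ioc = {
    "type": "domain",
    "value": "login-secure-update.example",
    "campaign": "phishing",
    "ttps": ["spearphishing link", "credential harvesting"],
    "first_seen": "2025-06-01T09:30:00Z",
    "confidence": "high",
    "sharing": "TLP:GREEN",  # traffic-light protocol marking
}

payload = json.dumps(ioc, indent=2)  # what actually goes over the wire

# A receiving organization matches the indicator against its own telemetry.
observed_domains = {"example.com", "login-secure-update.example"}
matches = [d for d in observed_domains if d == ioc["value"]]
print(matches)
```

The value of real-time sharing is in the second half: a recipient can match incoming indicators against its own logs within minutes, rather than discovering the campaign independently.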

Daily Tech Digest - June 02, 2025


Quote for the day:

"The best way to predict the future is to create it." -- Peter Drucker


Doing nothing is still doing something

Here's the uncomfortable truth: doing nothing is still doing something – and very often, it's the wrong thing. We saw this play out at the start of the year when Donald Trump's likely return to the White House and the prospect of fresh tariffs sent ripples through global markets. Investors froze, and while the tariffs have been shelved (for now), the real damage had already been done – not to portfolios, but to behaviour. This is decision paralysis in action. And in my experience, it's most acute among entrepreneurs and high-net-worth individuals post-exit, many of whom are navigating wealth independently for the first time. It's human nature to crave certainty, especially when it comes to money, but if you're waiting for a time when everything is calm, clear, and safe before investing or making a financial decision, I've got bad news – that day is never going to arrive. Markets move, the political climate is noisy, the global economy is always in flux. If you're frozen by fear, your money isn't standing still – it's slipping backwards. ... Entrepreneurs are used to taking calculated risks, but when it comes to managing post-exit wealth or personal finances, many find themselves out of their depth. A little knowledge can be a dangerous thing – and half-understanding the tax system, the economy, or the markets can lead to costly mistakes.


The Future of Agile Isn’t ‘agile’

One reason is that agilists introduced too many conflicting and divergent approaches that fragmented the market. “Agile” meant so many things to different people that hiring managers could never predict what they were getting when a candidate’s resume indicated s/he was “experienced in agile development.” Another reason organizations failed to generate value with “agile” was that too many agile approaches focused on changing practices or culture while ignoring the larger delivery system in which the practices operate, reinforcing a culture that is resistant to change. This shouldn’t be a surprise to people following our industry, as my colleague and LeadingAgile CEO Mike Cottmeyer has been talking about why agile fails for over a decade, as in his Agile 2014 presentation, Why is Agile Failing in Large Enterprises… and What You Can Do About It. The final reason that led “agile” to its current state of disfavor is that early in the agile movement there was too much money to be made in training and certifications. The industry’s focus on certifications had the effect over time of misaligning the goals of the methodology / training companies and their customers. “Train everyone. Launch trains” may be a short-term success pattern for a methodology purveyor, but it is ultimately unsustainable because the training and practices are too disconnected from the tangible results senior executives need to compete and win in the market.


CIOs get serious about closing the skills gap — mainly from within

Staffing and talent issues are affecting CIOs’ ability to double down on strategic and innovation objectives, according to 54% of this year’s respondents. As a result, closing the skills gap has become a huge priority. “What’s driving it in some CIOs’ minds is tied back to their AI deployments,” says Mark Moccia, a vice president research director at Forrester. “They’re under a lot of cost pressure … to get the most out of AI deployments” to increase operational efficiencies and lower costs, he says. “It’s driving more of a need to close the skills gap and find people who have deployed AI successfully.” AI, generative AI, and cybersecurity top the list of skills gaps preventing organizations from achieving objectives, according to an April Gartner report. Nine out of 10 organizations have adopted or plan to adopt skills-based talent growth to address those challenges. ... The best approach, Karnati says, is developing talent from within. “We’re equipping our existing teams with the space, tools, and support needed to explore genAI through practical application, including rapid prototyping, internal hackathons, and proof-of-concept sprints,” Karnati says. “These aren’t just technical exercises — they’re structured opportunities for cross-functional learning, where engineers, product leads, and domain experts collaborate to test real use cases.”


The Critical Quantum Timeline: Where Are We Now And Where Are We Heading?

Technically, the term is fault-tolerant quantum computing. The qubits that quantum computers use to process data have to be kept in a delicate state – sometimes frozen to temperatures very close to absolute zero – in order to stay stable and not “decohere”. Keeping them in this state for longer periods of time requires large amounts of energy but is necessary for more complex calculations. Recent research by Google, among others, is pointing the way towards developing more robust and resilient quantum methods. ... One of the most exciting prospects ahead of us involves applying quantum computing to AI. Firstly, many AI algorithms involve solving the types of problems that quantum computers excel at, such as optimization problems. Secondly, with its ability to more accurately simulate and model the physical world, it will generate huge amounts of synthetic data. ... Looking beyond the next two decades, quantum computing will be changing the world in ways we can’t even imagine yet, just as the leap to transistors and microchips enabled the digital world and the internet of today. It will tackle currently impossible problems, help us create fantastic new materials with amazing properties and medicines that affect our bodies in new ways, and help us tackle huge problems like climate change and cleaning the oceans.


6 hard truths security pros must learn to live with

Every technological leap will be used against you - Information technology is a discipline built largely on rapid advances. Some of these technological leaps can help improve your ability to secure the enterprise. But every last one of them brings new challenges from a security perspective, not the least of which is how they will be used to attack your systems, networks, and data. ... No matter how good you are, your organization will be victimized - This is a hard one to swallow, but if we take the “five stages of grief” approach to cybersecurity, it’s better to reach the “acceptance” level than to remain in denial because much of what happens is simply out of your control. A global survey of 1,309 IT and security professionals found that 79% of organizations suffered a cyberattack within the past 12 months, up from 68% just a year ago, according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ... Breach blame will fall on you — and the fallout could include personal liability - As if getting victimized by a security breach isn’t enough, new Securities and Exchange Commission (SEC) rules put CISOs in the crosshairs for potential criminal prosecution. The new rules, which went into effect in 2023, require publicly listed companies to report any material cybersecurity incident within four business days.


Are you an A(I)ction man?

Whilst individually AI-generated action figures have a small impact - a drop in the ocean you could say - trends like this exemplify how easy it is to use AI en masse, and collectively create an ocean of demand. Seeing the number of individuals, even those with knowledge of AI’s lofty resource consumption, partaking in the creation of these avatars, makes me wonder if we need greater awareness of the collective impact of GenAI. Now, I want to take a moment to clarify this is not a criticism of those producing AI-generated content, or of anyone who has taken part in the ‘action figure’ trend. I’ve certainly had many goes with DALL-E for fun, and taken part in various trends in my time, but the volume of these recent images caught my attention. Many of the conversations I had at Connect New York a few weeks ago addressed sustainability and the need for industry collaboration, but perhaps we should also be instilling more awareness from an end-user point of view. After all, ChatGPT, according to the Washington Post, consumes 39.8 million kWh per day. I’d be fascinated to see the full picture of power and water consumption from the AI-generated action figures. Whilst it will only account for a tiny fraction of overall demand, these drops can have a tendency to accumulate. 


The MVP Dilemma: Scale Now or Scale Later?

Teams often have few concrete requirements about scalability. The business may not be a reliable source of information but, as we noted above, they do have a business case that has implicit scalability needs. It’s easy for teams to focus on functional needs, early on, and ignore these implicit scaling requirements. They may hope that scaling won’t be a problem or that they can solve the problem by throwing more computing resources at it. They have a legitimate concern about overbuilding and increasing costs, but hoping that scaling problems won't happen is not a good scaling strategy. Teams need to consider scaling from the start. ... The MVP often has implicit scalability requirements, such as "in order for this idea to be successful we need to recruit ten thousand new customers". Asking the right questions and engaging in collaborative dialogue can often uncover these. Often these relate to success criteria for the MVP experiment. ... Some people see asynchronous communication as another scaling panacea because it allows work to proceed independently of the task that initiated the work. The theory is that the main task can do other things while work is happening in the background. So long as the initiating task does not, at some point, need the results of the asynchronous task to proceed, asynchronous processing can help a system to scale. 
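The asynchronous-processing point can be sketched in a few lines: the main task keeps serving requests while background work runs, and it only blocks when it actually needs the result. The task names and timings below are illustrative.

```python
# Sketch of asynchronous processing as a scaling aid: the main task keeps
# working while a background task runs, and waits only when it actually
# needs the result.
import asyncio

async def background_report() -> str:
    await asyncio.sleep(0.1)  # stands in for slow I/O or heavy computation
    return "report ready"

async def main() -> list[str]:
    events = []
    task = asyncio.create_task(background_report())  # kick off, don't wait
    events.append("handled user request")            # main task stays busy
    events.append("handled another request")
    events.append(await task)                        # block only when needed
    return events

print(asyncio.run(main()))
# ['handled user request', 'handled another request', 'report ready']
```

The caveat from the text shows up at the `await`: if the main task needs the result immediately, asynchrony buys nothing, which is why it is an aid rather than a panacea.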


Data Integrity: What It Is and Why It Matters

By contrast, data quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and data quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. By confirming the quality of their data, businesses improve the efficiency of their data operations, increase the value of their data, and enhance collaboration and decision-making. Data quality efforts also help companies reduce their costs, enhance employee productivity, and establish closer relationships with their customers. Implementing a data integrity strategy begins by identifying the sources of potential data corruption in your organization. These include human error, system malfunctions, unauthorized access, failure to validate and test, and lack of governance. A data integrity plan operates at both the database level and the business level.
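One common database-level integrity mechanism is storing a checksum alongside each record so that silent corruption or unauthorized modification is detectable on read. A minimal sketch, with a hypothetical record shape:

```python
# Sketch of a database-level integrity check: store a checksum with each
# record so silent corruption or tampering can be detected on read.
import hashlib
import json

def with_checksum(record: dict) -> dict:
    body = json.dumps(record, sort_keys=True).encode()
    return {"data": record, "sha256": hashlib.sha256(body).hexdigest()}

def is_intact(stored: dict) -> bool:
    body = json.dumps(stored["data"], sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest() == stored["sha256"]

row = with_checksum({"account": "A-1001", "balance": 250})
print(is_intact(row))          # True

row["data"]["balance"] = 999   # simulate corruption or tampering
print(is_intact(row))          # False
```

Real databases do this with page checksums and write-ahead logs, but the principle is the same: integrity is verified mechanically, while quality (is 250 the *right* balance?) still needs validation rules and human judgment.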


Backup-as-a-service explained: Your guide to cloud data protection

With BaaS, enterprises have quick, easy access to their data. Providers store multiple copies of backups in different locations so that data can be recovered when lost due to outages, failures or accidental deletion. BaaS also features geographic distribution and automatic failover, when data handling is automatically moved to a different server or system in the event of an incident to ensure that it is safe and readily available. ... With BaaS, the provider uses its own cloud infrastructure and expertise to handle the entire backup and restoration process. Enterprises simply connect to the backup engine, set their preferences and the platform handles file transfer, encryption and maintenance. Automation is the engine that drives BaaS, helping ensure that data is continuously backed up without slowing down network performance or interrupting day-to-day work. Enterprises first select the data they need backed up — whether it be simple files or complex apps — backup frequency and data retention times. ... Enterprises shouldn’t just jump right into BaaS — proper preparation is critical. Firstly, it is important to define a backup policy that identifies the organization’s critical data that must be backed up. This policy should also include backup frequency, storage location and how long copies should be retained.
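The policy elements described (what to back up, how often, how long to retain copies) can be captured declaratively. The structure below is a hypothetical sketch, not any provider's actual configuration format.

```python
# Hypothetical backup policy expressed as a simple declarative structure.
# Field names and values are invented for illustration.
backup_policy = {
    "datasets": [
        {"name": "customer-db", "frequency_hours": 4, "retention_days": 90},
        {"name": "file-shares", "frequency_hours": 24, "retention_days": 30},
    ],
    "copies": 3,                        # multiple copies, different locations
    "regions": ["eu-west", "eu-north"],
    "encrypt_in_transit": True,
}

def backups_per_week(dataset: dict) -> int:
    """How many backup runs a dataset's frequency implies per week."""
    return (7 * 24) // dataset["frequency_hours"]

for ds in backup_policy["datasets"]:
    print(ds["name"], backups_per_week(ds))
```

Writing the policy down like this, before signing up with a provider, forces the "which data is critical, how often, for how long" conversation the article recommends.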


CISO 3.0: Leading AI governance and security in the boardroom

AI is expanding the CISO’s required skillset beyond cybersecurity to include fluency in data science, machine learning fundamentals, and understanding how to evaluate AI models – not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential. Fortunately, AI has also evolved how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are helping to close the skills gap more effectively. Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce. CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. When evaluating an AI tool, it’s important to understand how it handles sensitive data and whether it has proven success in similar environments. Beyond that, it’s also vital to evaluate how well the tool aligns with your governance model, that it can be audited, and that it integrates well into your existing systems. Lastly, overpromised capabilities or an unclear roadmap for support are signs to proceed with caution.

Daily Tech Digest - May 12, 2025


Quote for the day:

"Our greatest fear should not be of failure but of succeeding at things in life that don't really matter." -- Francis Chan



The rise of vCISO as a viable cybersecurity career path

Companies that don’t have the means to hire a full-time CISO still face the same harsh realities their peers do — heightened compliance demands, escalating cyber incidents, and growing tech-related risks. A part-time security leader can help them assess their state of security and build out a program from scratch, or assist a full-time director-level security leader with a project. ... In some of these ongoing relationships this could be to fill the proverbial chair of the CISO, doing all the traditional work of the role on a part-time basis. This is the kind of arrangement most likely to be referred to as a fractional role. Other retainer arrangements may just be for an advisory position where the client is buying regular mindshare of the vCISO to supplement their tech team’s knowledge pool. They could be a strategic sounding board to the CIO or even a subject-matter expert to the director of security or newly installed CISO. But vCISOs can work on a project-by-project or hourly basis as well. “It’s really what works best for my potential client,” says Demoranville. “I don’t want to force them into a box. So, if a subscription model works or a retainer, cool. If they only want me here for a short engagement, maybe we’re trying to put in a compliance regimen for ISO 27001 or you need me to review NIST, that’s great too.”


Why Indian Banks Need a Sovereign Cloud Strategy

Enterprises need to not only implement better compliance strategies but also rethink the entire IT operating model. Managed sovereign cloud services can help enterprises address this need. ... The need for true sovereignty becomes crucial in a world where many global cloud providers, even when operating within Indian data centers, are subject to foreign laws such as the U.S. Clarifying Lawful Overseas Use of Data (CLOUD) Act or the Foreign Intelligence Surveillance Act. These regulations can compel disclosure of Indian banking data to overseas governments, undermining trust and violating the spirit of data localization mandates. "When an Indian bank chooses a global cloud provider with U.S. exposure, they're essentially opening a backdoor for foreign jurisdictions to access sensitive Indian financial data," Rajgopal said. "Sovereignty is a strategic necessity." Managed sovereign clouds not only align with India's compliance frameworks but also reduce complexity by integrating regulatory controls directly into the cloud stack. Instead of treating compliance as an afterthought, it is incorporated in the architecture. ... "Banks today are not just managing money; they are managing trust, security and compliance at unprecedented levels. Sovereign cloud is no longer optional. It's the future of financial resilience," said Pai.


Study Suggests Quantum Entanglement May Rewrite the Rules of Gravity

Entanglement entropy measures the degree of quantum correlation between different regions of space and plays a key role in quantum information theory and quantum computing. Because entanglement captures how information is shared across spatial boundaries, it provides a natural bridge between quantum theory and the geometric fabric of spacetime. In conventional general relativity, the curvature of spacetime is determined by the energy and momentum of matter and radiation. The new framework adds another driver: the quantum information shared between fields. This extra term modifies Einstein’s equations and offers an explanation for some of gravity’s more elusive behaviors, including potential corrections to Newton’s gravitational constant. ... One of the more striking implications involves black hole thermodynamics. Traditional equations for black hole entropy and temperature rely on Newton’s constant being fixed. If gravity “runs” with energy scale — as the study proposes — then these thermodynamic quantities also shift. ... Ultimately, the study does not claim to resolve quantum gravity, but it does reframe the problem. By showing how entanglement entropy can be mathematically folded into Einstein’s equations, it opens a promising path that links spacetime to information — a concept familiar to quantum computer scientists and physicists alike.
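The excerpt does not reproduce the study's equations, but the structure it describes, a running gravitational coupling plus an entanglement-driven source term, can be written schematically. This is only an illustrative form, not the paper's actual result:

```latex
% Schematic only -- illustrating the structure described in the excerpt,
% not the study's actual equations. The standard field equations,
%   G_{\mu\nu} = 8\pi G \, T_{\mu\nu},
% gain an energy-scale-dependent coupling G(\mu) and an extra
% information-driven source term:
G_{\mu\nu} \;=\; 8\pi G(\mu)\left( T_{\mu\nu} + T^{\mathrm{ent}}_{\mu\nu} \right),
\qquad
T^{\mathrm{ent}}_{\mu\nu} \;\sim\; \frac{\delta S_{\mathrm{ent}}}{\delta g^{\mu\nu}},
```

where $S_{\mathrm{ent}}$ is the entanglement entropy shared across spatial boundaries and $G(\mu)$ captures the proposal that Newton's "constant" runs with energy scale, which is why the black hole thermodynamic quantities that depend on it would shift too.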


Maximising business impact: Developing mission-critical skills for organisational success

Often, L&D is perceived merely as an HR-led function tasked with building workforce capabilities. However, this narrow framing severely limits its potential impact. As Cathlea shared, “It’s time to educate leaders that L&D is not just a support role—it’s a business-critical responsibility that must be shared across the organisation.” By understanding what success looks like through the eyes of different functions, L&D teams can design programmes that support those ambitions — and crucially, communicate value in language that business leaders understand. The panel referenced a case from a tech retailer with over 150,000 employees, where the central L&D team worked to identify cross-cutting capability needs, such as communication, project management, and leadership, while empowering local departments to shape their training solutions. This balance of central coordination and local autonomy enabled the organisation to scale learning in a way that was both relevant and impactful. ... The shift towards skill-based development is also transforming how learning experiences are designed and delivered. What matters most is whether these learning moments are recognised, supported, and meaningfully connected to broader organisational goals.


What software developers need to know about cybersecurity

Training developers to write secure code shouldn’t be looked at as a one-time assignment. It requires a cultural shift. Start by making secure coding techniques the standard practice across your team. Two of the most critical (yet frequently overlooked) practices are input validation and input sanitization. Input validation ensures incoming data is appropriate and safe for its intended use, reducing the risk of logic errors and downstream failures. Input sanitization removes or neutralizes potentially malicious content—like script injections—to prevent exploits like cross-site scripting (XSS). ... Authentication and authorization aren’t just security check boxes—they define who can access what and how. This includes access to code bases, development tools, libraries, APIs, and other assets. ... APIs may be less visible, but they form the connective tissue of modern applications. APIs are now a primary attack vector, with API attacks growing 1,025% in 2024 alone. The top security risks? Broken authentication, broken authorization, and lax access controls. Make sure security is baked into API design from the start, not bolted on later. ... Application logging and monitoring are essential for detecting threats, ensuring compliance, and responding promptly to security incidents and policy violations. Logging is more than a check-the-box activity—for developers, logging can be a critical line of defense.
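The validation/sanitization distinction is easiest to see in code. Here is a minimal Python sketch using only the standard library; the username rule and function names are illustrative, not taken from any particular framework:

```python
import html
import re

# Validation: accept only data appropriate for its intended use.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value: str) -> str:
    """Reject anything that does not match the expected shape."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

# Sanitization: neutralize content so it cannot execute as markup (XSS).
def sanitize_for_html(value: str) -> str:
    return html.escape(value, quote=True)

validate_username("alice_42")  # passes through unchanged
sanitize_for_html("<script>alert(1)</script>")
# -> "&lt;script&gt;alert(1)&lt;/script&gt;"
```

Note the two techniques answer different questions: validation asks "is this input allowed at all?" while sanitization asks "how do I render whatever arrived without letting it execute?" Robust code does both.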


Why security teams cannot rely solely on AI guardrails

The core issue is that most guardrails are implemented as standalone NLP classifiers—often lightweight models fine-tuned on curated datasets—while the LLMs they are meant to protect are trained on far broader, more diverse corpora. This leads to misalignment between what the guardrail flags and how the LLM interprets inputs. Our findings show that prompts obfuscated with Unicode, emojis, or adversarial perturbations can bypass the classifier, yet still be parsed and executed as intended by the LLM. This is particularly problematic when guardrails fail silently, allowing semantically intact adversarial inputs through. Even emerging LLM-based judges, while promising, are subject to similar limitations. Unless explicitly trained to detect adversarial manipulations and evaluated across a representative threat landscape, they can inherit the same blind spots. To address this, security teams should move beyond static classification and implement dynamic, feedback-based defenses. Guardrails should be tested in-system with the actual LLM and application interface in place. Runtime monitoring of both inputs and outputs is critical to detect behavioral deviations and emergent attack patterns. Additionally, incorporating adversarial training and continual red teaming into the development cycle helps expose and patch weaknesses before deployment. 
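The Unicode-obfuscation bypass described above can be demonstrated with a toy guardrail. This is a deliberately naive blocklist classifier (the phrase and function names are invented for illustration): fullwidth Latin characters slip past a raw-text match, while NFKC normalization folds them back to ASCII so the check fires.

```python
import unicodedata

BLOCKLIST = {"ignore previous instructions"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is blocked. Matches on raw text only."""
    return any(phrase in prompt.lower() for phrase in BLOCKLIST)

def normalizing_guardrail(prompt: str) -> bool:
    """Same check after NFKC normalization, which folds fullwidth and
    other compatibility characters back to their ASCII forms."""
    folded = unicodedata.normalize("NFKC", prompt).lower()
    return any(phrase in folded for phrase in BLOCKLIST)

# Fullwidth Latin letters: visually similar, byte-wise different.
obfuscated = "ｉｇｎｏｒｅ ｐｒｅｖｉｏｕｓ ｉｎｓｔｒｕｃｔｉｏｎｓ"
naive_guardrail(obfuscated)        # False — slips past the classifier
normalizing_guardrail(obfuscated)  # True  — caught after normalization
```

Normalization closes only this one channel; an LLM may still parse homoglyphs, emoji encodings, or adversarial perturbations that survive NFKC, which is why the article argues for testing guardrails in-system against the actual model rather than in isolation.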


Finding the Right Architecture for AI-Powered ESG Analysis

Rather than choosing between competing approaches, we developed a hybrid architecture that leverages the strengths of both deterministic workflows and agentic AI: For report analysis: We implemented a structured workflow that removes the Intent Agent and Supervisor from the process, instead providing our own intention through a report workflow. This orchestrates the process using the uploaded sustainability file, synchronously chaining prompts and agents to obtain the company name and relevant materiality topics, then asynchronously producing a comprehensive analysis of environmental, social, and governance aspects. For interactive exploration: We maintained the conversational, agentic architecture as a core component of the solution. After reviewing the initial structured report, analysts can ask follow-up questions like, “How does this company’s emissions reduction claims compare to their industry peers?” ... By marrying these approaches, enterprise architects can build systems that maintain human oversight while leveraging AI to handle data-intensive tasks – keeping human analysts firmly in the driver’s seat with AI serving as powerful analytical tools rather than autonomous decision-makers. As we navigate the rapidly evolving landscape of AI implementation, this balanced approach offers a valuable pathway forward.
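The report workflow described above — a synchronous chain to extract the company name and materiality topics, then an asynchronous fan-out across the environmental, social, and governance analyses — can be sketched as follows. Every function here is a hypothetical placeholder standing in for a prompt or agent call:

```python
import asyncio

def extract_company(report_text: str) -> str:
    # Placeholder for an agent call that pulls the company name.
    return report_text.split()[0]

def extract_topics(report_text: str) -> list[str]:
    # Placeholder for a materiality-topic extraction step.
    return ["emissions", "labor practices", "board governance"]

async def analyze_aspect(aspect: str, company: str, topics: list[str]) -> str:
    # Placeholder for an agent producing one aspect of the analysis.
    await asyncio.sleep(0)  # stands in for a model call
    return f"{aspect} analysis for {company} covering {len(topics)} topics"

async def report_workflow(report_text: str) -> dict[str, str]:
    # Synchronous chain: each step feeds the next.
    company = extract_company(report_text)
    topics = extract_topics(report_text)
    # Asynchronous fan-out: E, S, and G analyses run concurrently.
    aspects = ["environmental", "social", "governance"]
    results = await asyncio.gather(
        *(analyze_aspect(a, company, topics) for a in aspects)
    )
    return dict(zip(aspects, results))

report = asyncio.run(report_workflow("Acme sustainability report ..."))
```

The design point is that the orchestration is deterministic — no intent agent or supervisor decides the control flow — while each boxed step can still be an LLM call under the hood.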


The Rise of xLMs: Why One-Size-Fits-All AI Models Are Fading

To reach its next evolution, the LLM market will follow all other widely implemented technologies and fragment into an “xLM” market of more specialized models, where the x stands for the many specialized variants. Language models are being implemented in more places with application- and use case-specific demands, such as lower power or higher security and safety measures. Size is another factor, but we’ll also see varying functionality and models that are portable, remote, hybrid, and domain and region-specific. With this progression, greater versatility and diversity of use cases will emerge, with more options for pricing, security, and latency. ... We must rethink how AI models are trained to fully prepare for and embrace the xLM market. The future of more innovative AI models and the pursuit of artificial general intelligence hinge on advanced reasoning capabilities, but this necessitates restructuring data management practices. ... Preparing real-time data pipelines for the xLM age inherently increases pressure on data engineering resources, especially for organizations currently relying on static batch data uploads and fine-tuning. Historically, real-time accuracy has demanded specialized teams to complete regular batch uploads while maintaining data accuracy, which presents cost and resource barriers. 


Ernst & Young exec details the good, bad and future of genAI deployments

“There is a huge skills gap in data science in terms of the number of people that can do that well, and that is not changing. Everywhere else we can talk about what jobs are changing and where the future is. But AI scientists, data scientists, continue to be the top two in terms of what we’re looking for. I do think organizations are moving to partner more in terms of trying to leverage those skills gap….” The more specific the case for the use of AI, the more easily you can calculate the ROI. “Healthcare is going to be ripe for it. I’ve talked to a number of doctors who are leveraging the power of AI and just doing their documentation requirements, using it in patient booking systems, workflow management tools, supply chain analysis. There are clear productivity gains, and they will be different per sector. “Are we also far enough along to see productivity gains in R&D and pharmaceuticals? Yes, we are. Is it the Holy Grail? Not yet, but we are seeing gains and that’s where I think it gets more interesting. “Are we far enough along to have systems completely automated and we just work with AI and ask the little fancy box in front of us to print out the balance sheet and everything’s good? No, we’re a hell of a long way away from that.”


How Human-Machine Partnerships Are Evolving in 2025

“Soon, there will be no function that does not have AI as a fundamental ingredient. While it’s true that AI will replace some jobs, it will also create new ones and reduce the barrier of entry into many markets that have traditionally been closed to just a technical or specialized group,” says Bukhari. “AI becoming a part of day-to-day life will also force us to embrace our humanity more than ever before, as the soft skills AI can’t replace will become even more critical for success in the workplace and beyond.” ... CIOs and other executives must be data and AI literate, so they are better equipped to navigate complex regulations, lead teams through AI-driven transformations and ensure that AI implementations are aligned with business goals and values. Cross-functional collaboration is also critical. ... AI innovation is already outpacing organizational readiness, so continuous learning, proactive strategy alignment and iterative implementation approaches are important. CIOs must balance infrastructure investments, like GPU resource allocation, with flexibility in computing strategies to stay competitive without compromising financial stability. “As the enterprise landscape increasingly incorporates AI-driven processes, the C-suite must cultivate specific skills that will cascade effectively through their management structures and their entire human workforce,” says Miskawi. 


Daily Tech Digest - March 21, 2025


Quote for the day:

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell



Synthetic data and the risk of ‘model collapse’

There is a danger of an ‘ouroboros’ here, or a snake eating its own tail. Models can be ‘poisoned’ with data that is passed on in addition to malicious prompts. While usually caused by sabotage, this can also be unintentional: AI models sometimes hallucinate, including when they are generating data for their LLM descendant. With enough ongoing errors, a new LLM risks performing worse than its predecessors. At its core, it’s a simple case of garbage in, garbage out. The logical end state is a total ‘model collapse’, where drivel overtakes anything factual and makes an LLM dysfunctional. Should this happen (and it may have happened with GPT-4.5), AI model makers are forced to pull back to an earlier checkpoint, reassess their data or be forced to make architectural changes. ... In short, a high degree of expertise is required for each step in the AI process. Currently, attention is focused on the initial building of the foundation models on the one hand and the actual implementation of GenAI on the other. The importance of training data was touched upon in 2023 because online organizations regularly felt robbed. In essence: it made headlines, which is why we all became aware of the intricacies of training data. Now that the flow of online retrievable data is ending, AI players are grasping for an alternative that is creating new problems.


Automated Workflow Perfection Is a Job in Itself

“The fragmented nature of automation – spanning robotic process automation, business process management, workflow tools and AI-powered solutions all further complicates consistent measurement,” lamented Gaudette. “Market segment overlap presents another challenge. As technologies increasingly converge, traditional category boundaries blur. A document processing solution might be classified under workflow automation by one analyst and digital process automation by another, creating inconsistent market size calculations.” Other survey “findings” from Custom Workflows’ analysis report suggest that the integration of artificial intelligence with traditional automation represents a particularly powerful growth catalyst. McKinsey’s own analysis reveals that while basic automation delivers 20-30% cost reductions, intelligent automation incorporating AI can achieve 50-70% savings while simultaneously improving quality and customer experience. ... As the market for workflow automation now goes into what we might call an amplified state of flux, it appears that current automation adoption follows a classic bell curve distribution, with most organizations clustered in the middle stages of implementation maturity. Surprisingly, smaller organizations often outperform their larger counterparts when it comes to automation success. 


The hidden risk in SaaS: Why companies need a digital identity exit strategy

To reduce dependency on external SaaS providers, organizations should consider taking back control of their digital identity infrastructure. This doesn’t mean abandoning cloud services altogether, but rather strategically deploying identity management solutions that provide ownership and portability. Self-hosted identity solutions running on private cloud or on-premises environments can offer greater control. Businesses should also consider multi-cloud identity architectures allowing authentication and access control to function across different cloud providers.  ... Organizations must closely monitor data sovereignty laws and adjust their infrastructure accordingly. Ensuring that identity solutions comply with shifting regulations will help avoid legal and operational risks. To avoid being caught off guard, it’s important for IT teams to understand what’s going on behind the scenes rather than entirely outsourcing their infrastructure. For the highest level of preparedness, organizations can manage identity infrastructure systems themselves, reducing reliance on third party SaaS companies for critical functions. If teams understand the inner workings of their identity management, they will be better placed to develop an emergency response plan with predefined steps to transition services in case of sudden geopolitical changes.


Why Your Business Needs an AI Innovation Unit

An AI innovation unit should always support sustainable and strategic organizational growth through the ethical and impactful application and integration of AI, McDonagh-Smith says. "Achieving this mission involves identifying and deploying AI technologies to solve complex and simple business problems, improving efficiency, cultivating innovation, and creating measurable new organizational value." A successful unit, McDonagh-Smith states, prioritizes aligning AI initiatives with the enterprise's long-term vision, ensuring transparency, fairness, and accountability in its AI applications. ... An AI innovation unit leader is foremost a business leader and visionary, responsible for helping the enterprise embrace and effectively use AI in an ethical and responsible manner, Hall says. "The leader needs to understand the risk and concerns, but also AI governance and frameworks." He adds that the leader should also be realistic and inspiring, with an understanding of the hype curve and the technology's potential. ... An AI innovation unit requires a collaborative culture that bridges silos within the organization and commits to continuous reflection and learning, McDonagh-Smith says. "The unit needs to establish practical partnerships with academic institutions, tech startups, and AI thought leadership groups to create flows of innovation, intelligence, and business insights."


How to avoid the AI complexity trap

When done right, AI enables simplicity, cutting across layers of complexity -- but with limits. "AI is not a silver bullet," said Richard Demeny, a software development consultant, formerly with Arm. "LLMs under the hood actually use probabilities, not understanding, to give answers. It's humans who design, build, and implement systems, and while AI may automate some entry-level roles and certainly bring significant productivity gains, it cannot replace the amount of practical experience IT decision-makers need to make the right trade-offs." ... To keep both AI and IT complexity at bay, "deployment of AI needs to be thoughtful," said Hashim. "Focus on the simplicity of user experience, quality of AI, and its ability to get things done," she said. "Uplevel all your employees with AI so that your organization as a whole can be more productive and happy." Consistency is the key to managing complexity, Howard said. Platforms, for example, "make things consistent. So you're able to do things -- sometimes very complicated things -- in consistent ways and standard ways that everybody knows how to use them. Even something as simple as definitions or taxonomy. If everybody is speaking the same language, so a simplified taxonomy, then it's much easier to communicate."  


Outsmart the skills gap crisis and build a team without recruitment

Team augmentation involves engaging external software engineers from a partner company to complement an existing in-house team. This approach provides companies with the flexibility to quickly scale their technical resources up or down, depending on the project’s needs, and plug any capability gaps inside their teams. It can be crucial to the success of businesses whose product is software, or relies on software, as it enables businesses to scale their team and projects flexibly without the risks involved with growing an in-house team. ... It allows companies to access a diverse range of skills and expertise that may not be available in-house. Companies can quickly ramp up their technical resources and tackle projects that require specialised skills or knowledge whilst onboarding engineers that can bring fresh ideas and perspectives to the project. Having access to this expertise quickly is often of paramount importance as companies compete to grow. For instance, if a company needs to design, develop, and support a mobile app, but its in-house team lacks the necessary skills and experience, it can quickly engage a team of engineers who specialise in mobile app development to work on the project. This approach can help companies save time and resources and ensure that their projects are completed on time and to a high standard.


Taking AI Commoditization Seriously

Commoditization is the process of products or services becoming “standardized, marketable objects.” Any given unit of a commodity, from corn to crude oil, is generally interchangeable with and sells for the same price as others. Commoditization of frontier models could emerge in a few ways. Perhaps, as Yann LeCun predicts, open-source models could equal or surpass closed-source performance. Or perhaps competing firms continue finding ways to match each other’s developments. Such competition has more above-board variants—top-tier engineers at different firms keeping pace with each other—and less. Consider, for instance, OpenAI’s allegations against DeepSeek of inappropriate copying. ... The emergence of new, decentralized AI threat vectors could offer the powers that be a common enemy. This might present a unique opportunity for US-China collaboration. Modern US-China collaboration has required tangible mutual interest to succeed. The most famous modern US-China agreement, the Nixon/Kissinger-Mao/Zhou normalization of US-China relations, occurred in large part to overcome a perceived common threat in the USSR. When few companies control cutting-edge frontier models, preventing third-party model misuse is comparatively simple. Fewer frontier developers imply fewer sites to monitor for malicious actors. 


Making Architecturally Significant Decisions

Architectural decisions are at the root of our practice but they are often hard to spot. The vast majority of decisions get processed at the team level and do not apply architectural thinking or have an architect involved at all. This approach can be a benefit in agile organizations if managed and communicated effectively. ... Envision an enterprise or company, then imagine all the teams in the organization working in parallel on changes, remember to add in maintenance teams and operations teams doing ‘keep the lights running’ work. ... To effectively manage decisions, the architecture team should put in place a decision management process early in its lifecycle, by making critical investments into how the organization is going to process decision points in the architecture engagement model. During the engagement methodology update and the engagement principles definition, the team will decide what levels of decisions must be exposed in the repository and their limits in duration, quality and effort. These principles will guide the decision methods for the entire team until the next methodology update. There are numerous decision methods and theories in the marketplace for making better decisions. The goal of the architecture decision repository is to ensure that decisions are made clearly, with appropriate tools and with respect for traceability.


What is predictive analytics? Transforming data into future insights

Predictive analytics draws its power from many methods and technologies, including big data, data mining, statistical modeling, ML, and assorted mathematical processes. Organizations use predictive analytics to sift through current and historical data to detect trends, and forecast events and conditions that should occur at a specific time, based on supplied parameters. With predictive analytics, organizations can find and exploit patterns contained within data in order to detect risks and opportunities. Models can be designed, for instance, to discover relationships between various behavior factors. Such models enable the assessment of either the promise or risk presented by a particular set of conditions, guiding informed decision making across various categories of supply chain and procurement events. ... Predictive analytics makes looking into the future more accurate and reliable than previous tools. As such it can help adopters find ways to save and earn money. Retailers often use predictive models to forecast inventory requirements, manage shipping schedules, and configure store layouts to maximize sales. Airlines frequently use predictive analytics to set ticket prices reflecting past travel trends. 
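A minimal illustration of the statistical-modeling side: an ordinary least-squares trend fit that forecasts the next period from historical values, the kind of model a retailer might use for inventory requirements. Pure Python, and the sales figures are invented for the example:

```python
def fit_trend(values: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit of values against time steps 0..n-1.
    Returns (slope, intercept)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast(values: list[float], steps_ahead: int = 1) -> float:
    """Extrapolate the fitted trend the given number of steps forward."""
    slope, intercept = fit_trend(values)
    return intercept + slope * (len(values) - 1 + steps_ahead)

# Hypothetical monthly unit sales; forecast next month's inventory need.
history = [120.0, 132.0, 141.0, 155.0, 168.0]
next_month = forecast(history)  # ~178.9
```

Real deployments layer on seasonality, exogenous variables, and ML models, but the shape is the same: fit a model to historical data, then score it forward under supplied parameters.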


C-Suite Leaders Must Rewire Businesses for True AI Value

AI's true value doesn't come from incremental gains but emerges when workflows are transformed completely. McKinsey found 21% of companies using gen AI have redesigned workflows and seen a significant effect on their bottom line. Morgan Stanley redesigned client interactions by integrating AI-powered assistants. Rather than just automating document retrieval, the company embedded AI into workflows, enabling advisers to generate customized reports and insights in real time. This improved efficiency and enhanced customer experience through more data-driven, personalized interactions. Boston Consulting Group highlighted that companies embedding AI into core business workflows report 40% higher process efficiency and 25% faster output. For CIOs and AI leaders, this highlights a crucial point. Deploying AI without rethinking workflows resembles putting a turbo engine in a low-end car. The real competitive advantage comes from integrating AI into the fabric of business operations and not in standalone tasks. ... AI is becoming a core function that enhances decision-making, automates tasks and drives innovation. McKinsey's report emphasized that AI's biggest value lies in large-scale transformation, not isolated use cases. 

Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.
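The core MoE mechanism — a gate scores the experts and only the best-scoring one runs for a given input — can be shown in a toy top-1 router. This framework-free sketch uses invented weights and trivial "experts"; production MoE layers route per token, use top-k selection, and add load-balancing losses:

```python
import math

def softmax(scores: list[float]) -> list[float]:
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class ToyMoE:
    """Top-1 mixture of experts: a gate scores each expert and only the
    highest-probability expert is evaluated (sparse activation)."""
    def __init__(self, experts, gate_weights):
        self.experts = experts            # list of callables
        self.gate_weights = gate_weights  # one weight vector per expert

    def __call__(self, x: list[float]):
        scores = [sum(w * xi for w, xi in zip(wv, x))
                  for wv in self.gate_weights]
        probs = softmax(scores)
        best = max(range(len(probs)), key=probs.__getitem__)
        # Only one expert runs; the others stay idle for this input.
        return self.experts[best](x), best

# Two hypothetical experts: one doubles its input, one negates it.
moe = ToyMoE(
    experts=[lambda x: [2 * v for v in x], lambda x: [-v for v in x]],
    gate_weights=[[1.0, 0.0], [0.0, 1.0]],
)
out, chosen = moe([3.0, 1.0])  # gate favors expert 0 here
```

The efficiency argument in the excerpt falls out of this structure: total parameter count grows with the number of experts, but per-input compute stays roughly that of a single expert.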


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires an internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models lack support and skills, thereby mandating users to build their own expertise, which could be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... The regulatory non-compliance adds another challenge as many governments restrict and disallow sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, and this could expose enterprises to additional risks. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Integrating strict encryption protocols can help the AI interactions remain confidential, and performing rigorous security audits ensure the model's safety before deploying it into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, the database team we worked with in an organization new to the cloud launched all the AWS RDS database servers from dev through production, incurring a $600K a month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather optimizing their own time and getting out of the way of the migration team as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting could have errors. For example, a bad network connection with Redis could cause reading rate limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users even though your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed even though the rate limit implementation is faulting. This also means rate limiting is not a workaround to poor capacity planning, as you should still have sufficient capacity to handle these requests or even design your system to scale accordingly to handle a large influx of new requests. This can be done through auto-scale, timeouts, and automatic trips that enable your API to still function.
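The fail-open behavior described above can be sketched with a fixed-window counter. The in-memory store is a toy stand-in for Redis `INCR`, the class names are invented, and the clock is frozen so the example is deterministic:

```python
import time

class CounterStoreError(Exception):
    """Stands in for e.g. a Redis connection failure."""

class FailOpenRateLimiter:
    def __init__(self, store, limit: int, window_seconds: int,
                 clock=time.time):
        self.store = store
        self.limit = limit
        self.window = window_seconds
        self.clock = clock

    def allow(self, client_id: str) -> bool:
        # Fixed-window counter: one key per client per time window.
        key = f"{client_id}:{int(self.clock() // self.window)}"
        try:
            count = self.store.incr(key)
        except CounterStoreError:
            # Fail open: if the counter backend is unreachable, admit the
            # request rather than locking out every client.
            return True
        return count <= self.limit

class InMemoryStore:
    """Toy stand-in for Redis INCR."""
    def __init__(self):
        self.counts = {}
    def incr(self, key: str) -> int:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]

class BrokenStore:
    def incr(self, key: str) -> int:
        raise CounterStoreError("connection refused")

limiter = FailOpenRateLimiter(InMemoryStore(), limit=2, window_seconds=60,
                              clock=lambda: 1_000_000.0)
results = [limiter.allow("client-a") for _ in range(3)]  # [True, True, False]
degraded = FailOpenRateLimiter(BrokenStore(), limit=2, window_seconds=60)
degraded.allow("client-a")  # True — backend down, requests still admitted
```

As the excerpt notes, failing open only works if capacity planning, auto-scaling, and timeouts can absorb the unthrottled traffic while the counter backend is down; the limiter is a policy layer, not a substitute for headroom.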


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Privacy protections related to abortion procedures are perhaps the most hotly debated type of patient information. For instance, last June - in response to the June 2022 Supreme Court's Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add additional safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which such healthcare is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risk

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.
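As a toy illustration of the demand-based power management described above, the sketch below powers down idle ports and scales fan speed with inlet temperature. Every threshold and RPM figure is an invented assumption, not a vendor specification:

```python
def power_plan(port_utilization, inlet_temp_c,
               idle_threshold=0.01, target_temp_c=45.0,
               fan_min_rpm=2000, fan_max_rpm=8000):
    """Toy demand-based power plan: sleep idle ports, scale fan speed.

    port_utilization maps port name -> fraction of link capacity in use.
    All numeric defaults are illustrative, not real hardware limits.
    """
    # Ports carrying negligible traffic are candidates for power-down.
    sleep_ports = [port for port, util in port_utilization.items()
                   if util < idle_threshold]
    # Scale fan speed linearly with how far we are above the target
    # temperature, clamped to the fan's operating range.
    excess = max(0.0, inlet_temp_c - target_temp_c)
    fan_rpm = min(fan_max_rpm, fan_min_rpm + int(excess * 600))
    return {"sleep_ports": sorted(sleep_ports), "fan_rpm": fan_rpm}
```

In a real switch, this logic lives in firmware and feeds off telemetry; the point of the sketch is just that "dynamic power management" reduces to comparing live demand against thresholds and adjusting per-component power states.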


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale. Adopting drift detection and remediation strategies is imperative for maintaining a resilient infrastructure.” ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security to your infrastructure. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
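At its core, drift detection is a comparison of a declared baseline against observed state. The sketch below uses sshd_config-style keys (`PermitRootLogin`, `Port`) purely as familiar examples; the function is a generic illustration, not Ansible's API:

```python
def detect_drift(desired, actual):
    """Report configuration drift between a declared baseline and a live host.

    Both arguments are flat {setting: value} dicts. Returns only the
    settings that diverge, with expected and found values for remediation.
    """
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)  # None if the setting is missing entirely
        if have != want:
            drift[key] = {"expected": want, "found": have}
    return drift


# Baseline mirroring the SSH policies mentioned above: no root login,
# non-default port. Values are illustrative.
SSH_BASELINE = {"PermitRootLogin": "no", "Port": 2222}
```

A remediation loop would feed each drifted key back into configuration management (an Ansible playbook run, for example) so the host converges on the baseline instead of accumulating inconsistencies.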


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.
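A first-pass audit of the facial-recognition example above is simply per-group accuracy and the gap between the best- and worst-served groups. Real fairness audits use richer metrics (equalized odds, demographic parity); this sketch is illustrative only:

```python
def group_accuracy(y_true, y_pred, groups):
    """Per-group accuracy and the worst-case gap, a basic bias audit.

    y_true and y_pred are parallel label sequences; groups assigns each
    sample to a demographic group. Returns ({group: accuracy}, gap).
    """
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    accuracy = {g: correct[g] / totals[g] for g in totals}
    # Gap between the best- and worst-served group: 0.0 means the model
    # performs identically across groups on this metric.
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap
```

A large gap flags exactly the failure mode the excerpt describes: a model that looks accurate in aggregate while underperforming for the groups underrepresented in its training data.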