
Daily Tech Digest - November 09, 2025


Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh



Way too complex: why modern tech stacks need observability

Recent outages have demonstrated that a heavy dependence on digital systems can lead to cascading faults that halt financial transactions, disrupt public transportation and even bring airport operations to a standstill. ... To operate with confidence, businesses must see across their entire digital supply chain, which is not possible with basic monitoring. Unlike traditional monitoring, which often focuses on siloed metrics or alerts, observability provides a unified, real-time view across the entire technology stack, enabling faster, data-driven decisions at scale. Implementing real-time, AI-powered observability covers every component from infrastructure and services to applications and user experience. ... Observability also enables organizations to proactively detect anomalies before they escalate into outages, quickly pinpoint root causes across complex, distributed systems and automate response actions to reduce mean time to resolution (MTTR). The result is faster, smarter and more resilient operations, giving teams the confidence to innovate without compromising system stability, a critical advantage in a world where digital resilience and speed must go hand in hand. Resilient systems must absorb shocks without breaking. This requires both cultural and technical investment, from embracing shared accountability across teams to adopting modern deployment strategies like canary releases, blue/green rollouts and feature flagging.
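To make the feature-flagging idea above concrete, here is a minimal sketch of a percentage-based flag check; the flag name, in-memory store, and rollout figure are hypothetical, and a production system would read flags from a dedicated service rather than a dict.

```python
import hashlib

# Hypothetical in-memory flag store; a real system would load flags
# from a flag service or configuration database.
FLAGS = {
    "new-billing-pipeline": {"enabled": True, "rollout_percent": 5},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the flag is on for this user (sticky percentage rollout)."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user id so the same user always lands in the same bucket.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

if __name__ == "__main__":
    users = [f"user-{i}" for i in range(1000)]
    exposed = sum(is_enabled("new-billing-pipeline", u) for u in users)
    print(f"{exposed} of {len(users)} users see the new code path")
```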


Radical Empowerment From Your Leadership: Understood by Few, Essential for All

“Radical empowerment, for me, isn’t about handing people a seat at the table. It’s about making sure they know the seat is already theirs,” said Trenika Fields, Business Legal, AI Leader at Cisco, MIT Sloan EMBA Class of ’26. “I set the vision and I trust my team to execute in ways that are anchored in the mission and tied to real business outcomes. But trust without depth doesn’t work. That’s where leading with empathy comes in. It’s my secret sauce, and it has to be real. You can’t fake it. People know when it’s performative. Real empathy builds confidence, and confidence fuels bold, decisive execution. When people feel seen, trusted, and strategically aligned, they lead like builders, not bystanders. Strip that trust and empathy away, and radical disempowerment moves in fast. Voices go quiet. Momentum dies. Innovation flatlines. But when you get it right, you don’t just build teams. You build powerhouses that set the standard and raise the bar for everyone else.” Why, given how simple this is, is it so hard for senior leadership to do versus say? I worked in an environment years ago where “radical candor” was the theme du jour rather than “radical empowerment.” An executive two levels above my boss was explaining radical candor, which, very simply put, means being constructive and forthright with empathy to help others grow.


Banks Can Convert Messy Data into Unstoppable Growth

Banks recognize the potential in tapping a trove of customer data, much of it unstructured, as a tool to personalize interactions and become more proactive. They are sitting on a goldmine of unstructured information hidden in PDFs, scanned forms, call notes and emails — data that, once cleaned and organized, can unlock new business opportunities, says Drew Singer, head of product at Middesk. ... The ability to successfully turn data into insights often depends on clear parameters for how data is handled. This includes a shared understanding of who owns the data, how it will be managed and stored, and a defined governance structure — possibly through committees — for overseeing its use, Deutsch says. "If you don’t set these rules, once data starts flowing, you will lose control of it. You will most likely lose quality," he says. ... With the data governance structure firmly in place, FIs are positioned to use additional tools to garner action-oriented insights across the organization. Truist Client Pulse, for example, uses AI and machine learning to analyze customer feedback across channels. ... "We’ve got a population of teammates using the tool as it stands today, to better understand regional performance opportunities …what’s going well with certain solutions that we have, and where there are areas of opportunity to enhance experience and elevate satisfaction to drive to client loyalty," says Graziano. 


Securing Digital Supply Chains: Confronting Cyber Threats in Logistics Networks

Modern logistics networks are filled with connected devices — from IoT sensors tracking shipments and telematics in trucks, to automated sorting systems and industrial controls in smart warehouses and ports. This Internet of Things (IoT) revolution offers incredible efficiency and real-time visibility, but it also increases the attack surface. Each connected sensor, RFID reader, camera, or vehicle telemetry unit is essentially an internet entry point that could be exploited if not properly secured. The spread of IoT devices introduces new vulnerabilities that must be managed effectively. For example, a hacker who hijacks a vulnerable warehouse camera or temperature sensor might find a way into the larger corporate network. ... The tightly interwoven nature of modern supply chains amplifies the impact of any single cyber incident, highlighting the importance of robust cybersecurity measures. Companies are now digitally linked with vendors and logistics partners, sharing data and connecting systems to improve efficiency. However, this interdependence means that a security failure at one point can quickly spread outward. ... While large enterprises may invest heavily in cybersecurity, they often depend on smaller partners who might lack the same resources or maturity. Global supply chains can involve hundreds of suppliers and service providers with varying security levels. 


For OT Cyber Defenders, Lack of Data Is the Biggest Threat

Data in the OT and ICS world is transient, said Lee. Instructions - legitimate, or not - flow across the network. Once executed, they vanish. "If I don't capture it during the attack, it's gone," Lee said. Post-incident forensics is basically impossible without specialized monitoring tools already in place. "So for the companies that aren't doing that data collection, that monitoring, prior to the attacks, they have no chance at actually figuring out if a cyberattack was involved or not." And that is a problem when nation-state adversaries have pre-positioned themselves within the networks of critical infrastructure providers, apparently ready to pivot to OT exploitation in time of conflict. ... Even when critical infrastructure operators do capture OT monitoring data, the sheer complexity of modern industrial processes means that finding out what went wrong is difficult. The inability to make use of more detailed data is an indicator of immaturity in the OT security space, Bryson Bort told Information Security Media Group. "The way I summarize the OT space is, it's a generation behind traditional IT," said Bort, a U.S. Army veteran and founder of the non-profit ICS Village. Bort helps organize the annual Hack the Capitol event, but he makes his living selling security services to critical infrastructure owners and operators. Most operators still don't have visibility into the ICS devices on their networks, Bort said. "What do I have? What assets are on my network?"


Cross-Border Compliance: Navigating Multi-Jurisdictional Risk with AI

The digital age has turned global expansion from an aspiration into a necessity. Yet, for companies operating across multiple countries, this opportunity comes wrapped in a Gordian knot of cross-border compliance. The sheer volume, complexity, and rapid change of multi-jurisdictional regulations—from GDPR and CCPA on data privacy to complex Anti-Money Laundering (AML) and financial reporting rules—pose an existential risk. What seems like a local detail in one jurisdiction may spiral into a costly mistake elsewhere. ... AI helps with cross-border compliance by automating risk management through real-time monitoring, analyzing vast datasets to detect fraud, and keeping up with constantly changing regulations. It navigates complex rules by using natural language processing (NLP) to interpret regulatory texts and automating tasks like document verification for KYC/KYB processes. By providing continuous, automated risk assessments and streamlining compliance workflows, AI reduces human error, improves efficiency, and ensures ongoing adherence to global requirements. AI, specifically through technologies like Machine Learning (ML) and Natural Language Processing (NLP), is the critical tool for cutting compliance costs by up to 50% while drastically improving accuracy and speed. AI and machine learning (ML) solutions, often referred to as RegTech, are streamlining compliance by automating tasks, enhancing data analysis, and providing real-time insights.


Best Practices for Building an AI-Powered OT Cybersecurity Strategy

One challenge in defending OT assets is that most industrial facilities still rely on decades-old hardware and software systems that were not designed with modern cybersecurity in mind. These legacy systems are often difficult to patch and contain documented vulnerabilities. Sophisticated adversaries know this and exploit these outdated systems as a point of entry. ... OT cybersecurity and regulatory compliance are tightly linked in manufacturing, but not interchangeable. Consider regulatory compliance the minimum bar you must clear to stay legally and contractually safe. At the same time, cybersecurity is the continuous effort you must take to protect your systems and operations. Manufacturers increasingly must prove OT cyber resilience to customers, partners, and regulators. A strong cybersecurity posture helps ensure certifications are passed, contracts are won, and reputations are protected. ... AI is a powerful tool for bolstering OT cybersecurity strategies by overcoming the common limitations of traditional, rule-based defenses. AI, whether machine learning, predictive AI, or agentic AI, provides advanced capabilities to help defenders detect threats, automate responses, manage assets, and enhance vulnerability management. ... Human oversight and expertise are vital for ensuring AI quality and contextual accuracy, especially in safety-critical OT environments. 


Training Data Preprocessing for Text-to-Video Models

Getting videos ready for a dataset is not merely a checkbox task - it’s a demanding, time-consuming process that can make or break the final model. At this stage, you’re typically dealing with a large collection of raw footage with no labels, no descriptions, and at best limited metadata like resolution or duration. If the sourcing process was well-structured, you might have videos grouped by domain or category, but even then, they’re not ready for training. The problems are straightforward but critical: there’s no guiding information (captions or prompts) for the model to learn from, and the clips are often far too long for most generative architectures, which tend to work with a context window (length of the video, like number of tokens for Large Language Models) measured in tens of seconds, not minutes. ... It might seem like the fastest approach is to label every scene you have. In reality, that’s a direct route to poor results. After all the previous steps, a dataset is rarely clean: it almost always contains broken clips, low-quality frames, and clusters of near-identical segments. The filtering stage exists to strip out this noise, leaving the model only with content worth learning from. This ensures that the model doesn’t spend time on data that won’t improve its output. ... Building a proper text-to-video dataset is an extremely complex task. However, it is impossible to build a text-to-video generation model without a good dataset.
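As a rough illustration of the filtering stage described above (not the pipeline used by any particular team), the sketch below samples frames from a clip with OpenCV, hashes them, and flags clips that are unreadable or nearly static; the clip path, sampling rate, and hash threshold are placeholder values.

```python
import cv2
import numpy as np

def frame_hash(frame, size=8):
    """Tiny average-hash: downscale to size x size grayscale and threshold on the mean."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def near_duplicate(h1, h2, max_bits=5):
    """Two frames count as near-identical if their hashes differ in only a few bits."""
    return np.count_nonzero(h1 != h2) <= max_bits

def filter_clip(path, sample_every=30):
    """Drop unreadable clips and flag clips whose sampled frames are almost identical."""
    cap = cv2.VideoCapture(path)
    hashes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            hashes.append(frame_hash(frame))
        idx += 1
    cap.release()
    if len(hashes) < 2:
        return "broken_or_too_short"
    static = all(near_duplicate(hashes[0], h) for h in hashes[1:])
    return "near_static" if static else "keep"

if __name__ == "__main__":
    print(filter_clip("example_clip.mp4"))  # hypothetical file name
```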


Putting Design Thinking into Practice: A Step-by-Step Guide

The key aim of this part of the design process is to frame your problem statement. This will guide the rest of your process. Once you’ve gathered insights from your users, the next step is to distil everything down to the real issue. There are many ways to do this, but if you’ve spoken to several users, start by analysing what they said to find patterns — what themes keep coming up, and what challenges do they all seem to face? ... Once you’ve got your problem statement, the next step is to start coming up with ideas. This is the fun part! The aim of this part of idea generation is not to find the perfect idea straight away, but to come up with as many ideas as possible. Start by brainstorming everything that comes to mind, no matter how unrealistic it sounds. At this point, quantity matters more than quality — you can always refine later. Write your ideas down, sketch them, or talk them through with friends or teammates. You might be surprised at how one silly suggestion sparks a genuinely good idea. ... Testing is the “last” stage of the design process. I say last with a bit of hesitation, because while it is technically last on the diagram, you are guaranteed to get a lot of feedback that will require you to go back to earlier stages of the design process and revisit ideas.


Beyond Resilience: How AI and Digital Twin technology are rewriting the rules of supply chain recovery

For decades, supply chain resilience meant having backup plans, alternate suppliers, safety stock, and crisis playbooks. That model doesn’t hold anymore. In a post-pandemic world shaped by trade wars, climate volatility, and technology shocks, disruptions are neither rare nor isolated. They’re structural. ... The KPIs of resilience have evolved. In most companies, traditional metrics like on-time delivery or supplier lead time fail to capture the system’s true flexibility. Modern analytics teams are redefining the measurement architecture around three key indicators: Mean time to recovery (MTTR): the time between initial disruption and full operational stability; Conditional value-at-risk (CVaR): a probabilistic measure of financial exposure under extreme stress; Supply network resilience index (SNRI): a composite score tracking substitution agility and cross-tier visibility. ... A hidden benefit of this new approach is its environmental alignment. When Schneider Electric built a multi-tier AI twin for its Asia-Pacific operations, it discovered that optimizing for resilience (diversifying ports, balancing lead times, and automating inventory allocation) also reduced carbon intensity per unit shipped by 12%. This was not the goal, but it proved that sustainability and resilience share a common denominator: efficiency. The smarter the network, the smaller its waste footprint. In boardrooms today, that realization is quietly rewriting ESG strategy.

Daily Tech Digest - August 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


DevSecOps 2.0: How Security-First DevOps Is Redefining Software Delivery

DevSecOps 2.0 is a true security-first revolution. This paradigm shift transforms software security into a proactive enabler, leveraging AI and policy-as-code to automate safeguards at scale. Security tools now blend seamlessly into developer workflows, and continuous compliance ensures real-time auditing. With ransomware, supply chain attacks, and other attacks on the rise, there is a need for a different approach to delivering resilient software. ... It marks a transformative approach to software development, where security is the foundation of the entire lifecycle. This evolution ensures proactive security that works to identify and neutralize threats early. ... AI-driven security is central to DevSecOps 2.0, which harnesses the power of artificial intelligence to transform security from a reactive process into a proactive defense strategy. By analyzing vast datasets, including security logs, network traffic, and code commit patterns, AI can detect subtle anomalies and predict potential threats before they materialize. This predictive capability enables teams to identify risks early, streamlining threat detection and facilitating automated remediation. For instance, AI can analyze commit patterns to predict code sections likely to contain vulnerabilities, allowing for targeted testing and prevention. 


What CIOs can do when AI boosts performance but kills motivation

“One of the clearest signs is copy-paste culture,” Anderson says. “When employees use AI output as-is, without questioning it or tailoring it to their audience, that’s a sign of disengagement. They’ve stopped thinking critically.” To prevent this, CIOs can take a closer look at how teams actually use AI. Honest feedback from employees can help, but there’s often a gap between what people say they use AI for and how they actually use it in practice, so trying to detect patterns of copy-paste usage can help improve workflows. CIOs should also pay attention to how AI affects roles, identities, and team dynamics. When experienced employees feel replaced, or when previously valued skills are bypassed, morale can quietly drop, even if productivity remains high on paper. “In one case, a senior knowledge expert, someone who used to be the go-to for tough questions, felt displaced when leadership started using AI to get direct answers,” Anderson says. “His motivation dropped because he felt his value was being replaced by a tool.” Over time, this expert started to use AI strategically, and saw it could reduce the ad-hoc noise and give him space for more strategic work. “That shift from threatened to empowered is something every leader needs to watch for and support,” he adds.


That ‘cheap’ open-source AI model is actually burning through your compute budget

The inefficiency is particularly pronounced for Large Reasoning Models (LRMs), which use extended “chains of thought” to solve complex problems. These models, designed to think through problems step-by-step, can consume thousands of tokens pondering simple questions that should require minimal computation. For basic knowledge questions like “What is the capital of Australia?” the study found that reasoning models spend “hundreds of tokens pondering simple knowledge questions” that could be answered in a single word. ... The research revealed stark differences between model providers. OpenAI’s models, particularly its o4-mini and newly released open-source gpt-oss variants, demonstrated exceptional token efficiency, especially for mathematical problems. The study found OpenAI models “stand out for extreme token efficiency in math problems,” using up to three times fewer tokens than other commercial models. ... The findings have immediate implications for enterprise AI adoption, where computing costs can scale rapidly with usage. Companies evaluating AI models often focus on accuracy benchmarks and per-token pricing, but may overlook the total computational requirements for real-world tasks. 
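The cost implication is easy to see with back-of-the-envelope arithmetic; the prices and token counts below are invented for illustration, not figures from the study.

```python
# Illustrative comparison of total inference cost: per-token prices and
# token counts are made up for the example, not measured values.
models = {
    "efficient-model":  {"price_per_1k_tokens": 0.60, "avg_output_tokens": 300},
    "verbose-reasoner": {"price_per_1k_tokens": 0.20, "avg_output_tokens": 3000},
}

requests_per_day = 10_000

for name, m in models.items():
    daily_cost = requests_per_day * m["avg_output_tokens"] / 1000 * m["price_per_1k_tokens"]
    print(f"{name}: ${daily_cost:,.2f} per day")

# A model that looks 3x cheaper per token can still cost more overall
# if it spends 10x the tokens "thinking" about each answer.
```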


AI Agents and the data governance wild west

Today, anyone from an HR director to a marketing intern can quickly build and deploy an AI agent simply using Copilot Studio. This tool is designed to be accessible and quick, making it easy for anyone to play around with and launch a sophisticated agent in no time at all. But when these agents are created outside of the IT department, most users aren’t thinking about data classification or access controls, and they become part of a growing shadow IT problem. ... The problem is that most users will not be thinking like a developer with governance in mind when creating their own agents. Therefore, policies must be imposed to ensure that key security steps aren’t skipped in the rush to deploy a solution. A new layer of data governance must be considered with steps that include configuring data boundaries, restricting who can access what data according to job role and sensitivity level, and clearly specifying which data resources the agent can pull from. AI agents should be built for purpose, using principles of least privilege. This will help avoid a marketing intern having access to the entire company’s HR file. Just like any other business-critical application, it needs to be adequately tested and ‘red-teamed’. Perform penetration testing to identify what data the agent can surface, to whom, and how accurate the data is.
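A least-privilege data boundary can be as simple as an explicit allowlist checked before an agent touches any connector. The roles and source names in this sketch are hypothetical placeholders, not Copilot Studio settings.

```python
# Minimal sketch of least-privilege data boundaries for self-built agents.
# Role names and data-source labels are hypothetical placeholders.
ALLOWED_SOURCES = {
    "marketing_intern": {"public_website", "campaign_metrics"},
    "hr_director":      {"public_website", "hr_records"},
}

def fetch_for_agent(role: str, source: str) -> str:
    """Refuse any data source that the agent owner's role is not entitled to."""
    allowed = ALLOWED_SOURCES.get(role, set())
    if source not in allowed:
        raise PermissionError(f"role '{role}' may not read '{source}'")
    return f"...contents of {source}..."  # placeholder for the real connector

if __name__ == "__main__":
    print(fetch_for_agent("hr_director", "hr_records"))        # allowed
    try:
        fetch_for_agent("marketing_intern", "hr_records")       # blocked
    except PermissionError as exc:
        print("blocked:", exc)
```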


Monitoring microservices: Best practices for robust systems

Collecting extensive amounts of telemetry data is most beneficial if you can combine, visualize and examine it effectively. A unified observability stack is paramount. By integrating tools like middleware that work together seamlessly, you create a holistic view of your microservices ecosystem. These unified tools ensure that all your telemetry information — logs, traces and metrics — is correlated and accessible from a single pane of glass, dramatically decreasing the mean time to detect (MTTD) and mean time to resolve (MTTR) problems. The power lies in seeing the whole picture, not just isolated data points. ... Collecting information is good, but acting on it is better. Define meaningful service level objectives (SLOs) that reflect the expected performance and reliability of your services. ... Monitoring microservices effectively is an ongoing journey that requires a commitment to standardization of data, using the right tools and a proactive mindset. By applying standardized observability practices, adopting a unified observability stack, continuously monitoring key metrics, setting meaningful SLOs and enabling enhanced root cause analysis, you can build a strong and resilient microservices architecture that truly serves your business needs and delights your customers.
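Acting on SLOs usually starts with simple error-budget math like the sketch below; the target and request counts are illustrative only.

```python
# Back-of-the-envelope error-budget math for an SLO; the target and counts
# are illustrative, not recommendations.
slo_target = 0.999          # 99.9% of requests should succeed over the window
total_requests = 2_500_000  # requests observed this month
failed_requests = 1_800     # requests that violated the SLO

error_budget = (1 - slo_target) * total_requests   # failures we can "afford"
budget_consumed = failed_requests / error_budget

print(f"Error budget: {error_budget:.0f} failed requests allowed")
print(f"Budget consumed: {budget_consumed:.0%}")
if budget_consumed > 1.0:
    print("SLO breached: freeze risky releases, focus on reliability work")
```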


How military leadership prepares veterans for cybersecurity success

After dealing with extremely high-pressure environments, in which making the wrong decision can cost lives, veterans and reservists have little trouble dealing with the kinds of risks found in the world of business, such as threats to revenue, brand value and jobs. What’s more, the time-critical mission mindset so essential within the military is highly relevant within cybersecurity, where attacks and breaches must be dealt with confidently, rapidly and calmly. In the armed forces, people often find themselves in situations so intense that Maslow’s hierarchy of needs is flipped on its head. You’re not aiming for self-actualization or more advanced goals, but simply trying to keep the team alive and maintain essential operations. ... Military experience, on the other hand, fosters unparalleled trust, honesty and integrity within teams. Armed forces personnel must communicate really difficult messages. Telling people that many of them may die within hours demands a harsh honesty, but it builds trust. Combine this with an ability to achieve shared goals, and military leaders inspire others to follow them regardless of the obstacles. So veterans bring blunt honesty, communication, and a mission focus to do what is needed to succeed. These are all characteristics that are essential in cybersecurity, where you have to call out critical risks that others might avoid discussing.


Reclaiming the Architect’s Role in the SDLC

Without an active architect guiding the design and implementation, systems can experience architectural drift, a term that describes the gradual divergence from the intended system design, leading to a fragmented and harder-to-manage system. In the absence of architectural oversight, development teams may optimize for individual tasks at the expense of the system’s overall performance, scalability, and maintainability. ... The architect is primarily accountable for the overall design and ensuring the system’s quality, performance, scalability, and adaptability to meet changing demands. However, relying on outdated practices, like manually written and updated design documents, is no longer effective. The modern software landscape, with multiple services, external resources, and off-the-shelf integrations, makes such documents stale almost as soon as they’re written. Consequently, architects must use automated tools to document and monitor live system architectures. These tools can help architects identify potential issues almost in real time, which allows them to proactively address problems and ensure design integrity throughout the development process. These tools are especially useful in the design stage, allowing architects to reclaim the role they once possessed and the responsibilities that come with it.


Is Vibe Coding Ready for Prime Time?

As the vibe coding ecosystem matures, AI coding platforms are rolling out safeguards like dev/prod separation, backups/rollback, single sign-on, and SOC 2 reporting, yet audit logging is still not uniform across tools. But until these enterprise-grade controls become standard, organizations must proactively build their own guardrails to ensure AI-generated code remains secure, scalable and trustworthy. This calls for a risk-based approach, one that adjusts oversight based on the likelihood and impact of potential risks. Not all use cases carry the same weight. Some are low-stakes and well-suited for experimentation, while others may introduce serious security, regulatory or operational risks. By focusing controls where they’re most needed, a risk-based approach helps protect critical systems while still enabling speed and innovation in safer contexts. ... To effectively manage the risks of vibe coding, teams need to ask targeted questions that reflect the unique challenges of AI-generated code. These questions help determine how much oversight is needed and whether vibe coding is appropriate for the task at hand. ... Vibe coding unlocks new ways of thinking for software development. However, it also shifts risk upstream. The speed of code generation doesn’t eliminate the need for review, control and accountability. In fact, it makes those even more important.


7 reasons the SOC is in crisis — and 5 steps to fix it

The problem is that our systems verify accounts, not actual people. Once an attacker assumes a user’s identity through social engineering, they can often operate within normal parameters for extended periods. Most detection systems aren’t sophisticated enough to recognise that John Doe’s account is being used by someone who isn’t actually John Doe. ... In large enterprises with organic system growth, different system owners, legacy environments, and shadow SaaS integrations, misconfigurations are inevitable. No vulnerability scanner will flag identity systems configured inconsistently across domains, cloud services with overly permissive access policies, or network segments that bypass security controls. These misconfigurations often provide attackers with the lateral movement opportunities they need once they’ve gained initial access through compromised credentials. Yet most organizations have no systematic approach to identifying and remediating these architectural weaknesses. ... External SOC providers offer round-the-clock monitoring and specialised expertise, but they lack the organizational context that makes detection effective. They don’t understand your business processes, can’t easily distinguish between legitimate and suspicious activities, and often lack the authority to take decisive action.


One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture

The goal of One Network is to enable uniform policies across services. To do so, we are looking to overcome the complexities of heterogeneous networking, different language runtimes, and the coexistence of monolith services and microservices. These complexities span multiple environments, including public, private, and multi-cloud setups. The idea behind One Network is to simplify the current state of affairs by asking, "Why do I need so many networks? Can I have one network?" ... One Network enables you to manage such a service by applying governance, orchestrating policy, and managing the small independent services. Each of these microservices is imagined as a service endpoint. This enables orchestrating and grouping these service endpoints without application developers needing to modify service implementation, so everything is done on a network. There are three ways to manage these service endpoints. The first is the classic model: you add a load balancer before a workload, such as a shopping cart service running in multiple regions, and that becomes your service endpoint. ... If you start with a flat network but want to create boundaries, you can segment by exposing only certain services and keeping others hidden. 

Daily Tech Digest - March 03, 2025


Quote for the day:

“If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work.” -- Thomas J. Watson




How to Create a Winning AI Strategy

“A winning AI strategy starts with a clear vision of what problems you’re solving and why,” says Surace. “It aligns AI initiatives with business goals, ensuring every project delivers measurable value. And it builds in agility, allowing the organization to adapt as technology and market conditions evolve.” ... AI is also not a solution to all problems. Like any other technology, it’s simply a tool that needs to be understood and managed. “Proper AI strategy adoption will require iteration, experimentation, and, inevitably, failure to end up at real solutions that move the needle. This is a process that will require a lot of patience,” says Lionbridge’s Rowlands-Rees. “[E]veryone in the organization needs to understand and buy in to the fact that AI is not just a passing fad -- it’s the modern approach to running a business. The companies that don’t embrace AI in some capacity will not be around in the future to prove everyone else wrong.” Organizations face several challenges when implementing AI strategies. For example, regulatory uncertainty is a significant hurdle and navigating the complex and evolving landscape of AI regulations across different jurisdictions can be daunting. ... “There’s a gap between AI’s theoretical potential and its practical business application. Companies invest millions in AI initiatives that prioritize speed to market over actual utility,” Palmer says.


Work-Life Balance: A Practitioner Viewpoint

Organisation policymakers must ensure a well-funded preventive health screening at all levels so those with identified health risks can be advised and guided suitably on their career choices. They can be helped to step back on their career accelerators, and their needs can be accommodated in the best possible manner. This requires a mature HR policy-making and implementation framework where identifying problems and issues does not negatively impact the employees' careers. Deploying programs that help employees identify and overcome stress issues will be beneficial. A considerable risk for individuals is adopting negative means like alcohol, tobacco, or even getting into a shell to address their stress issues, and that can take an enormous toll on their well-being. Kindling purposeful passion alongside work is yet another strategy. In today's world, an urgent task to be assigned is just a phone call away. One can have some kind of purposeful passion that keeps us engaged alongside our work. This passion will have its purpose; one can fall back on it to keep oneself together and draw inspiration. Purposeful passion can include things such as acquiring a new skill in a sport, learning to play a musical instrument, learning a new dance form, playing with kids, spending quality time with family members in deliberate and planned ways, learning meditation, environment protection and working for other social causes.


The 8 new rules of IT leadership — and what they replace

The CIO domain was once confined to the IT department. But to be tightly partnered and co-lead with the business, CIOs must increasingly extend their expertise across all departments. “In the past they weren’t as open to moving out of their zone. But the role is becoming more fluid. It’s crossing product, engineering, and into the business,” says Erik Brown, an AI and innovation leader in the technology and experience practice at digital services firm West Monroe. Brown compares this new CIO to startup executives, who have experience and knowledge across multiple functional areas, who may hold specific titles but lead teams made up of workers from various departments, and who will shape the actual strategy of the company. “The CIOs are not only seeing strategy, but they will inform it; they can shape where the business is moving, and then they can take that to their teams and help them brainstorm how to support that. And that helps build more impactful teams,” Brown says. He continues: “You look at successful leaders of today and they’re all going to have a blended background. CIOs are far broader in their understanding, and where they’re more shallow, they’ll surround themselves with deputies that have that depth. They’re not going to assume they’re an expert in everything. So they may have an engineering background, for example, and they’ll surround themselves with those who are more experienced in that.”


Managing AI APIs: Best Practices for Secure and Scalable AI API Consumption

Managing AI APIs presents unique challenges compared to traditional APIs. Unlike conventional APIs that primarily facilitate structured data exchange, AI APIs often require high computational resources, dynamic access control and contextual input filtering. Moreover, large language models (LLMs) introduce additional considerations such as prompt engineering, response validation and ethical constraints that demand a specialized API management strategy. To effectively manage AI APIs, organizations need specialized API management strategies that can address unique challenges such as model-specific rate limiting, dynamic request transformations, prompt handling, content moderation and seamless multi-model routing, ensuring secure, efficient and scalable AI consumption. ... As organizations integrate multiple external AI providers, egress AI API management ensures structured, secure and optimized consumption of third-party AI services. This includes governing AI usage, enhancing security, optimizing cost and standardizing AI interactions across multiple providers. Below are some best practices for exposing AI APIs via egress gateways: Optimize Model Selection: Dynamically route requests to AI models based on cost, latency or regulatory constraints. 
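The "Optimize Model Selection" practice can be sketched as a small routing function over a model catalog; the provider names, prices, latencies, and regions below are made-up placeholders rather than real offerings.

```python
from typing import Optional

# Sketch of egress routing: send each request to the cheapest candidate model
# that satisfies its latency and data-residency constraints.
CATALOG = [
    {"name": "provider-a/small",  "cost": 0.2, "p95_latency_ms": 400,  "region": "eu"},
    {"name": "provider-b/large",  "cost": 1.5, "p95_latency_ms": 1200, "region": "us"},
    {"name": "provider-c/medium", "cost": 0.6, "p95_latency_ms": 700,  "region": "eu"},
]

def pick_model(max_latency_ms: int, required_region: Optional[str] = None) -> str:
    candidates = [
        m for m in CATALOG
        if m["p95_latency_ms"] <= max_latency_ms
        and (required_region is None or m["region"] == required_region)
    ]
    if not candidates:
        raise RuntimeError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m["cost"])["name"]

if __name__ == "__main__":
    # A latency-sensitive, EU-resident request lands on the cheapest EU model.
    print(pick_model(max_latency_ms=800, required_region="eu"))
```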


Charting the AI-fuelled evolution of embedded analytics

First of all, the technical requirements are high. To fit today’s suite of business tools, embedded analytics have to be extremely fast, lightweight, and very scalable, otherwise they risk dragging down the performance of the entire app. “As development and the web moves to single-page apps using frameworks like Angular and React, it becomes more and more critical that the embedded objects are lightweight, efficient, and scalable. In terms of embedded implementations for the developer, that’s probably one of the biggest things to look out for,” advises Perez. On top of that, there’s security, which is “another gigantic problem and headache for everybody,” observes Perez. “Usually, the user logs into the hosting app and then they need to query data relevant to them, and that involves a security layer.” Balancing the need for fast access to relevant data against the needs for compliance with data privacy regulations and security for your own proprietary information can be a complex juggling act. ... Additionally, the main benefit of embedded analytics is that it makes insights easily accessible to line-of-business users. “It should be very easy to use, with no prior training requirements, it should accept and understand all kinds of requests, and more importantly, it needs to seamlessly work on the company’s internal data,” says Perez.


The Ransomware Payment Ban – Will It Work?

A complete, although targeted, ban on ransom payments for public sector organisations is intended to remove cybercriminals’ financial motivation. However, without adequate investment in resilience, these organisations may be unable to recover as quickly as they need to, putting essential services at risk. Many NHS healthcare providers and local councils are already dealing with outdated infrastructure and cybersecurity staff shortages. If they are expected to withstand ransomware attacks without the option of paying, they must be given the resources, funding, and support to defend themselves and recover effectively. A payment ban may disrupt criminal operations in the short term. However, it doesn’t address the root of the issue – the attacks will persist, and vulnerable systems remain an open door. Cybercriminals are adaptive. If one revenue stream is blocked, they’ll find other ways to exploit weaknesses, whether through data theft, extortion, or targeting less-regulated entities. The requirement for private organisations to report payment intentions before proceeding aims to help authorities track ransomware trends. However, this approach risks delaying essential decisions in high-pressure situations. During a ransomware crisis, decisions must often be made in hours, if not minutes. Adding bureaucratic hurdles to these critical moments could exacerbate operational chaos.


The Modern CIO: Architect of the Intelligent Enterprise

Moving forward, traditional technology-driven CIOs will likely continue to lose leadership influence and C-suite presence as more strategic, business-focused CxOs move in. “There is a growing divergence. And the CIO that plays more of a modern CTO role will not have a seat at the table,” Clydesdale-Cotter said. This increased business focus demands that CIOs not only have a broad and deep technical understanding of how new technologies reshape the company’s relationship with the broader market and how the business operates, but also command fluency in the vertical markets of their business and take accountability not just for the ROI on digital initiatives but for the broader success of the business as well. There’s probably no technology having a more significant impact today than AI adoption. ... The maturation of generative AI is moving CIOs from managing pilot deployments to enterprise-scale initiatives. Starting this year, analysts expect about half of CIOs to increasingly prioritize fostering data-centric cultures, ensuring clean, accessible datasets to train their AI models. However, challenges persist: a 2024 Deloitte survey found that 59% of employees resist AI adoption due to job security fears, requiring CIOs to lead change management programs that emphasize upskilling.


7 Steps to Building a Smart, High-Performing Team

Hiring is just the beginning — training is where the real magic happens. One of the biggest mistakes I see business owners make is throwing new hires into the deep end without proper onboarding. ... A strong team is built on clarity. Employees should know exactly what is expected of them from day one. Clear role definitions, performance benchmarks and a structured feedback system help employees stay aligned with company goals. Peter Drucker, often called the father of modern management, once said, "What gets measured gets managed." Establishing key performance indicators (KPIs) ensures that every team member understands how their work contributes to the company's broader objectives. ... Just like in soccer, some players will need a yellow card — a warning that performance needs to improve. The best teams address underperformance before it becomes a chronic issue. A well-structured performance review system, including monthly check-ins and real-time feedback, helps keep employees on track. A study from MIT Sloan Management Review found that teams that receive continuous feedback perform 22% better than those with annual-only reviews. If an employee continues to underperform despite clear feedback and support, it may be time for the red card — letting them go. 


How eBPF is changing container networking

eBPF is revolutionary because it works at the kernel level. Even though containers on the same host have their own isolated view of user space, says Rice, all containers and the host share the same kernel. Applying networking, observability, or security features here makes them instantly available to all containerized applications with little overhead. “A container doesn’t even need to be restarted, or reconfigured, for eBPF-based tools to take effect,” says Rice. Because eBPF operates at the kernel level to implement network policies and operations such as packet routing, filtering, and load balancing, it’s better positioned than other cloud-native networking technologies that work in the user space, says IDC’s Singh. ... “eBPF comes with overhead and complexity that should not be overlooked, such as kernel requirements, which often require newer kernels, additional privileges to run the eBPF programs, and difficulty debugging and troubleshooting when things go wrong,” says Sun. A limited pool of eBPF expertise is available for such troubleshooting, adding to the hesitation. “It is reasonable for service mesh projects to continue using and recommending iptables rules,” she says. Meta’s use of Cilium netkit across millions of containers shows eBPF’s growing usage and utility.
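For a feel of why kernel-level hooks need no container changes, here is a classic minimal example using bcc, the Python front end to eBPF. It assumes a Linux host with root privileges and the bcc toolkit installed, and it merely logs clone() syscalls, yet it observes every container sharing that kernel without restarting or reconfiguring any of them.

```python
# Requires Linux, root privileges, and the bcc toolkit (https://github.com/iovisor/bcc).
from bcc import BPF

# The eBPF program itself is written in restricted C and compiled at load time.
prog = r"""
int count_clone(void *ctx) {
    bpf_trace_printk("clone() observed\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the clone syscall: every process and container on this kernel is covered.
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="count_clone")

print("Tracing clone() across all containers on this host... Ctrl-C to stop")
b.trace_print()
```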


If Architectural Experimentation Is So Great, Why Aren’t You Doing It?

Architectural experimentation is important for two reasons: For functional requirements, MVPs are essential to confirm that you understand what customers really need. Architectural experiments do the same for technical decisions that support the MVP; they confirm that you understand how to satisfy the quality attribute requirements for the MVP. Architectural experiments are also important because they help to reduce the cost of the system over time. This has two parts: you will reduce the cost of developing the system by finding better solutions, earlier, and by not going down technology paths that won’t yield the results you want. Experimentation also pays for itself by reducing the cost of maintaining the system over time by finding more robust solutions. Ultimately running experiments is about saving money - reducing the cost of development by spending less on developing solutions that won’t work or that will cost too much to support. You can’t run experiments on every architectural decision and eliminate the cost of all unexpected changes, but you can run experiments to reduce the risk of being wrong about the most critical decisions. While stakeholders may not understand the technical aspects of your experiments, they can understand the monetary value.


Daily Tech Digest - February 04, 2025


Quote for the day:

"Develop success from failures. Discouragement and failure are two of the surest stepping stones to success." -- Dale Carnegie


Technology skills gap plagues industries, and upskilling is a moving target

“The deepening threat landscape and rapidly evolving high-momentum technologies like AI are forcing organizations to move with lightning speed to fill specific gaps in their job architectures, and too often they are stumbling,” said David Foote, chief analyst at consultancy Foote Partners. To keep up with the rapidly changing landscape, Gartner suggests that organizations invest in agile learning for tech teams. “In the context of today’s AI-fueled accelerated disruption, many business leaders feel learning is too slow to respond to the volume, variety and velocity of skills needs,” said Chantal Steen, a senior director in Gartner’s HR practice. “Learning and development must become more agile to respond to changes faster and deliver learning more rapidly and more cost effectively.” Studies from staffing firm ManpowerGroup, hiring platform Indeed, and Deloitte consulting show that tech hiring will focus on candidates with flexible skills to meet evolving demands. “Employers know a skilled and adaptable workforce is key to navigating transformation, and many are prioritizing hiring and retaining people with in-demand flexible skills that can flex to where demand sits,” said Jonas Prising, ManpowerGroup chair and CEO.


Mixture of Experts (MoE) Architecture: A Deep Dive & Comparison of Top Open-Source Offerings

The application of MoE to open-source LLMs offers several key advantages. Firstly, it enables the creation of more powerful and sophisticated models without incurring the prohibitive costs associated with training and deploying massive, single-model architectures. Secondly, MoE facilitates the development of more specialized and efficient LLMs, tailored to specific tasks and domains. This specialization can lead to significant improvements in performance, accuracy, and efficiency across a wide range of applications, from natural language translation and code generation to personalized education and healthcare. The open-source nature of MoE-based LLMs promotes collaboration and innovation within the AI community. By making these models accessible to researchers, developers, and businesses, MoE fosters a vibrant ecosystem of experimentation, customization, and shared learning. ... Integrating MoE architecture into open-source LLMs represents a significant step forward in the evolution of artificial intelligence. By combining the power of specialization with the benefits of open-source collaboration, MoE unlocks new possibilities for creating more efficient, powerful, and accessible AI models that can revolutionize various aspects of our lives.


The DeepSeek Disruption and What It Means for CIOs

The emergence of DeepSeek has also revived a long-standing debate about open-source AI versus proprietary AI. Open-source AI is not a silver bullet. CIOs need to address critical risks as open-source AI models, if not secured properly, can be exposed to grave cyberthreats and adversarial attacks. While DeepSeek currently shows extraordinary efficiency, it requires an internal infrastructure, unlike GPT-4, which can seamlessly scale on OpenAI's cloud. Open-source AI models lack support and skills, thereby mandating users to build their own expertise, which could be demanding. "What happened with DeepSeek is actually super bullish. I look at this transition as an opportunity rather than a threat," said Steve Cohen, founder of Point72. ... The regulatory non-compliance adds another challenge as many governments restrict and disallow sensitive enterprise data from being processed by Chinese technologies. The possibility of a backdoor can't be ruled out, and this could expose enterprises to additional risks. CIOs need to conduct extensive security audits before deploying DeepSeek. Organizations can implement safeguards such as on-premises deployment to avoid data exposure. Integrating strict encryption protocols can help AI interactions remain confidential, and performing rigorous security audits ensures the model's safety before deploying it into business workflows.


Why GreenOps will succeed where FinOps is failing

The cost-control focus fails to engage architects and engineers in rethinking how systems are designed, built and operated for greater efficiency. This lack of engagement results in inertia and minimal progress. For example, the database team we worked with in an organization new to the cloud launched all the AWS RDS database servers from dev through production, incurring a $600K a month cloud bill nine months before the scheduled production launch. The overburdened team was not thinking about optimizing costs, but rather optimizing their own time and getting out of the way of the migration team as quickly as possible. ... GreenOps — formed by merging FinOps, sustainability and DevOps — addresses the limitations of FinOps while integrating sustainability as a core principle. Green computing contributes to GreenOps by emphasizing energy-efficient design, resource optimization and the use of sustainable technologies and platforms. This foundational focus ensures that every system built under GreenOps principles is not only cost-effective but also minimizes its environmental footprint, aligning technological innovation with ecological responsibility. Moreover, we’ve found that providing emissions feedback to architects and engineers is a bigger motivator than cost to inspire them to design more efficient systems and build automation to shut down underutilized resources.
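The automation to shut down underutilized resources mentioned above can start as a small script like the sketch below, which uses boto3 to stop opted-in EC2 instances whose recent CPU usage is low; the threshold, lookback window, region, and the auto-stop tag are all illustrative choices, not recommendations.

```python
# Sketch of "shut down underutilized resources" automation.
# Thresholds, tag names, and region are illustrative; adapt before real use.
from datetime import datetime, timedelta, timezone
import boto3

REGION = "eu-west-1"
CPU_THRESHOLD = 5.0      # average CPU % below which an instance counts as idle
LOOKBACK_HOURS = 24

ec2 = boto3.client("ec2", region_name=REGION)
cw = boto3.client("cloudwatch", region_name=REGION)

def avg_cpu(instance_id: str) -> float:
    end = datetime.now(timezone.utc)
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(hours=LOOKBACK_HOURS),
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 100.0

def stop_idle_dev_instances() -> None:
    # Only touch instances explicitly opted in via a (hypothetical) auto-stop tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:auto-stop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    for r in reservations:
        for inst in r["Instances"]:
            iid = inst["InstanceId"]
            if avg_cpu(iid) < CPU_THRESHOLD:
                print(f"Stopping idle instance {iid}")
                ec2.stop_instances(InstanceIds=[iid])

if __name__ == "__main__":
    stop_idle_dev_instances()
```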


Best Practices for API Rate Limits and Quotas

Unlike short-term rate limits, the goal of quotas is to enforce business terms such as monetizing your APIs and protecting your business from high-cost overruns by customers. They measure customer utilization of your API over longer durations, such as per hour, per day, or per month. Quotas are not designed to prevent a spike from overwhelming your API. Rather, quotas regulate your API’s resources by ensuring a customer stays within their agreed contract terms. ... Even a protection mechanism like rate limiting could have errors. For example, a bad network connection with Redis could cause reading rate limit counters to fail. In such scenarios, it’s important not to artificially reject all requests or lock out users even though your Redis cluster is inaccessible. Your rate-limiting implementation should fail open rather than fail closed, meaning all requests are allowed even though the rate limit implementation is faulting. This also means rate limiting is not a workaround to poor capacity planning, as you should still have sufficient capacity to handle these requests or even design your system to scale accordingly to handle a large influx of new requests. This can be done through auto-scale, timeouts, and automatic trips that enable your API to still function.
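A fail-open limiter is straightforward to sketch: treat any Redis error as permission to proceed. The window size and limit below are arbitrary examples, and a fixed window is used only to keep the sketch short.

```python
# Sketch of a fixed-window rate limiter that fails open: if Redis is
# unreachable, requests are allowed rather than rejected.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

RATE_LIMIT = 100        # requests allowed per window (example value)
WINDOW_SECONDS = 60

def allow_request(api_key: str) -> bool:
    window = int(time.time() // WINDOW_SECONDS)
    key = f"rl:{api_key}:{window}"
    try:
        count = r.incr(key)
        if count == 1:
            r.expire(key, WINDOW_SECONDS)   # expire the counter with its window
        return count <= RATE_LIMIT
    except redis.exceptions.RedisError:
        # Fail open: a broken limiter should not take the whole API down.
        return True

if __name__ == "__main__":
    print(allow_request("customer-123"))
```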


Protecting Ultra-Sensitive Health Data: The Challenges

Protecting ultra-sensitive information "is an incredibly confusing and complicated and evolving part of the law," said regulatory attorney Kirk Nahra of the law firm WilmerHale. "HIPAA generally does not distinguish between categories of health information," he said. "There are exceptions - including the recent Dobbs rule - but these are not fundamental in their application," he said. Privacy protections related to abortion procedures are perhaps the most hotly debated type of patient information. For instance, last June - in response to the June 2022 Supreme Court's Dobbs ruling, which overturned the national right to abortion - the Biden administration's U.S. Department of Health and Human Services modified the HIPAA Privacy Rule to add additional safeguards for the access, use and disclosure of reproductive health information. The rule is aimed at protecting women from the use or disclosure of their reproductive health information when it is sought to investigate or impose liability on individuals, healthcare providers or others who seek, obtain, provide or facilitate reproductive healthcare that is lawful under the circumstances in which such healthcare is provided. But that rule is being challenged in federal court by 15 state attorneys general seeking to revoke the regulations.


Evolving threat landscape, rethinking cyber defense, and AI: Opportunities and risk

Businesses are firmly in attackers’ crosshairs. Financially motivated cybercriminals conduct ransomware attacks with record-breaking ransoms being paid by companies seeking to avoid business interruption. Others, including nation-state hackers, infiltrate companies to steal intellectual property and trade secrets to gain commercial advantage over competitors. Further, we regularly see critical infrastructure being targeted by nation-state cyberattacks designed to act as sleeper cells that can be activated in times of heightened tension. Companies are on the back foot. ... As zero trust disrupts obsolete firewall and VPN-based security, legacy vendors are deploying firewalls and VPNs as virtual machines in the cloud and calling it zero trust architecture. This is akin to DVD hardware vendors deploying DVD players in a data center and calling it Netflix! It gives a false sense of security to customers. Organizations need to make sure they are really embracing zero trust architecture, which treats everyone as untrusted and ensures users connect to specific applications or services, rather than a corporate network. ... Unfortunately, the business world’s harnessing of AI for cyber defense has been slow compared to the speed of threat actors harnessing it for attacks. 


Six essential tactics data centers can follow to achieve more sustainable operations

By adjusting energy consumption based on real-time demand, data centers can significantly enhance their operational efficiency. For example, during periods of low activity, power can be conserved by reducing energy use, thus minimizing waste without compromising performance. This includes dynamic power management technologies in switch and router systems, such as shutting down unused line cards or ports and controlling fan speeds to optimize energy use based on current needs. Conversely, during peak demand, operations can be scaled up to meet increased requirements, ensuring consistent and reliable service levels. Doing so not only reduces unnecessary energy expenditure, but also contributes to sustainability efforts by lowering the environmental impact associated with energy-intensive operations. ... Heat generated from data center operations can be captured and repurposed to provide heating for nearby facilities and homes, transforming waste into a valuable resource. This approach promotes a circular energy model, where excess heat is redirected instead of discarded, reducing the environmental impact. Integrating data centers into local energy systems enhances sustainability and offers tangible benefits to surrounding areas and communities whilst addressing broader energy efficiency goals.


The Engineer’s Guide to Controlling Configuration Drift

“Preventing configuration drift is the bedrock for scalable, resilient infrastructure,” comments Mayank Bhola, CTO of LambdaTest, a cloud-based testing platform that provides instant infrastructure. “At scale, even small inconsistencies can snowball into major operational inefficiencies. We encountered these challenges [user-facing impact] as our infrastructure scaled to meet growing demands. Tackling this challenge head-on is not just about maintaining order; it’s about ensuring the very foundation of your technology is reliable. And so, by treating infrastructure as code and automating compliance, we at LambdaTest ensure every server, service, and setting aligns with our growth objectives, no matter how fast we scale. Adopting drift detection and remediation strategies is imperative for maintaining a resilient infrastructure. ... The policies you set at the infrastructure level, such as those for SSH access, add another layer of security to your infrastructure. Ansible allows you to define policies like removing root access, changing the default SSH port, and setting user command permissions. “It’s easy to see who has access and what they can execute,” Kampa remarks. “This ensures resilient infrastructure, keeping things secure and allowing you to track who did what if something goes wrong.”
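Drift detection ultimately reduces to comparing a desired-state definition against what is actually observed on a host; the toy sketch below does that for a handful of hypothetical SSH settings.

```python
# Toy drift check: compare a desired-state definition (what IaC says) against
# values observed on a host. Keys and values are hypothetical examples.
desired = {
    "ssh_port": 2222,
    "permit_root_login": "no",
    "max_auth_tries": 3,
}

observed = {
    "ssh_port": 22,              # someone changed it back by hand
    "permit_root_login": "no",
    "max_auth_tries": 3,
}

drift = {
    key: {"desired": want, "observed": observed.get(key)}
    for key, want in desired.items()
    if observed.get(key) != want
}

if drift:
    print("Configuration drift detected:")
    for key, values in drift.items():
        print(f"  {key}: expected {values['desired']!r}, found {values['observed']!r}")
else:
    print("Host matches desired state")
```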


Strategies for mitigating bias in AI models

The need to address bias in AI models stems from the fundamental principle of fairness. AI systems should treat all individuals equitably, regardless of their background. However, if the training data reflects existing societal biases, the model will likely reproduce and even exaggerate those biases in its outputs. For instance, if a facial recognition system is primarily trained on images of one demographic, it may exhibit lower accuracy rates for other groups, potentially leading to discriminatory outcomes. Similarly, a natural language processing model trained on predominantly Western text may struggle to understand or accurately represent nuances in other languages and cultures. ... Incorporating contextual data is essential for AI systems to provide relevant and culturally appropriate responses. Beyond basic language representation, models should be trained on datasets that capture the history, geography, and social issues of the populations they serve. For instance, an AI system designed for India should include data on local traditions, historical events, legal frameworks, and social challenges specific to the region. This ensures that AI-generated responses are not only accurate but also culturally sensitive and context-aware. Additionally, incorporating diverse media formats such as text, images, and audio from multiple sources enhances the model’s ability to recognise and adapt to varying communication styles.
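One of the simplest checks for the bias described above is a per-group accuracy audit; the groups, labels, and predictions in this sketch are synthetic, and real audits would use far richer fairness metrics.

```python
# Simple per-group accuracy audit to surface uneven model performance.
# The records are synthetic examples, not real evaluation data.
from collections import defaultdict

records = [
    # (demographic_group, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    hits[group] += int(truth == pred)

accuracy = {g: hits[g] / totals[g] for g in totals}
for group, acc in accuracy.items():
    print(f"{group}: accuracy {acc:.0%}")
print(f"Accuracy gap: {max(accuracy.values()) - min(accuracy.values()):.0%}")
```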

Daily Tech Digest - December 26, 2024

Best Practices for Managing Hybrid Cloud Data Governance

Kausik Chaudhuri, CIO of Lemongrass, explains that monitoring in hybrid-cloud environments requires a holistic approach that combines strategies, tools, and expertise. “To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility,” he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. ... Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. “However, this metadata is often absent or limited in scope within legacy systems,” says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems to incorporate them into data catalogs. Elsberry notes that a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically have an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. “Gradually updating legacy systems to support modern data governance requirements is also essential,” he says.
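As a rough sketch of the REST-based integration Elsberry describes, the snippet below registers metadata extracted from a legacy system with a data catalog over HTTP. The endpoint, token, and payload shape are hypothetical; any real catalog product defines its own API contract.

```python
# Sketch: publish metadata extracted from a legacy system to a data catalog
# over REST. The endpoint, token, and payload shape are hypothetical; real
# catalog products each define their own API contracts.

import json
import urllib.request

CATALOG_URL = "https://catalog.example.com/api/v1/datasets"  # hypothetical endpoint
API_TOKEN = "replace-me"

def register_legacy_dataset(name: str, source_system: str, columns: list[str]) -> int:
    payload = {
        "name": name,
        "sourceSystem": source_system,
        "columns": columns,
        "origin": "legacy-batch-export",  # mark provenance for governance review
    }
    req = urllib.request.Request(
        CATALOG_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (commented out because the endpoint above is fictional):
# status = register_legacy_dataset("customer_master", "MAINFRAME_CRM", ["cust_id", "region"])
```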


These Founders Are Using AI to Expose and Eliminate Security Risks in Smart Contracts

The vulnerabilities lurking in smart contracts are well-known but often underestimated. “Some of the most common issues include Hidden Mint functions, where attackers inflate token supply, or Hidden Balance Updates, which allow arbitrary adjustments to user balances,” O’Connor says. These aren’t isolated risks—they happen far too frequently across the ecosystem. ... “AI allows us to analyze huge datasets, identify patterns, and catch anomalies that might indicate vulnerabilities,” O’Connor explains. Machine learning models, for instance, can flag issues like reentrancy attacks, unchecked external calls, or manipulation of minting functions—and they do it in real-time. “What sets AI apart is its ability to work with bytecode,” he adds. “Almost all smart contracts are deployed as bytecode, not human-readable code. Without advanced tools, you’re essentially flying blind.” ... As blockchain matures, smart contract security is no longer the sole concern of developers. It’s an industry-wide challenge that impacts everyone, from individual users to large enterprises. DeFi platforms increasingly rely on automated tools to monitor contracts and secure user funds. Centralized exchanges like Binance and Coinbase assess token safety before listing new assets. 
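For a sense of what bytecode-level screening involves, the toy sketch below scans EVM runtime bytecode for opcodes that commonly warrant review. It is a naive static heuristic, not the machine-learning analysis the founders describe, and it can misfire on data bytes inside PUSH instructions.

```python
# Naive illustration of bytecode-level screening: scan EVM runtime bytecode for
# opcodes that often warrant closer review. This is a toy heuristic, not an
# ML-based analyzer, and it can false-positive on PUSH data bytes that merely
# happen to contain these values.

RISKY_OPCODES = {
    0xF4: "DELEGATECALL",   # can hand control to arbitrary code
    0xFF: "SELFDESTRUCT",   # can remove the contract entirely
}

def scan_bytecode(hex_code: str) -> list[str]:
    code = bytes.fromhex(hex_code.removeprefix("0x"))
    return sorted({name for b in code for op, name in RISKY_OPCODES.items() if b == op})

# Example: a made-up byte string containing a DELEGATECALL opcode.
print(scan_bytecode("0x6080604052f4"))  # ['DELEGATECALL']
```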


Three best change management practices to take on board in 2025

For change management to truly succeed, companies need to move from being change-resistant to change-ready. This means building up "change muscles" -- helping teams become adaptable and comfortable with change over the long term. For Mel Burke, VP of US operations at Grayce, the key to successful change is speaking to both the "head" and the "heart" of your stakeholders. Involve employees in the change process by giving them a voice and the ability to shape it as it happens. ... Change management works best when you focus on the biggest risks first and reduce the chance of major disruptions. Dedman calls this strategy "change enablement," where change initiatives are evaluated and scored on critical factors like team expertise, system dependencies, and potential customer impact. High-scorers get marked red for immediate attention, while lower-risk ones stay green for routine monitoring to keep the process focused and efficient. ... Peter Wood, CTO of Spectrum Search, swears by creating a "success signals framework" that combines data-driven metrics with culture-focused indicators. "System uptime and user adoption rates are crucial," he notes, "but so are team satisfaction surveys and employee retention 12-18 months post-change." 
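Returning to the scoring idea above, here is a minimal sketch of such a change-risk rubric; the factor names, weights, and red/green threshold are assumptions for illustration, not Dedman's actual scheme.

```python
# Sketch of a simple "change enablement" scoring scheme: each change is rated
# on a few risk factors and flagged red or green. Factor names, weights, and
# the threshold are illustrative assumptions only.

def score_change(team_expertise: int, system_dependencies: int, customer_impact: int) -> dict:
    """Each factor rated 1 (low risk) to 5 (high risk)."""
    weights = {"team_expertise": 0.3, "system_dependencies": 0.3, "customer_impact": 0.4}
    total = (
        team_expertise * weights["team_expertise"]
        + system_dependencies * weights["system_dependencies"]
        + customer_impact * weights["customer_impact"]
    )
    return {"score": round(total, 2), "status": "red" if total >= 3.5 else "green"}

print(score_change(team_expertise=2, system_dependencies=4, customer_impact=5))
# {'score': 3.8, 'status': 'red'} -> needs immediate attention before rollout
```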


Corporate Data Governance: The Cornerstone of Successful Digital Transformation

While traditional data governance focuses on the continuous and tactical management of data assets – ensuring data quality, consistency, and security – corporate data governance elevates this practice by integrating it with the organization’s overall governance framework and strategic objectives. It ensures that data management practices are not operating in silos but are harmoniously aligned and integrated with business goals, regulatory requirements, and ethical standards. In essence, corporate data governance acts as a bridge between data management and corporate strategy, ensuring that every data-related activity contributes to the organization’s mission and objectives. ... In the digital age, data is a critical asset that can drive innovation, efficiency, and competitive advantage. However, without proper governance, data initiatives can become disjointed, risky, and misaligned with corporate goals. Corporate data governance ensures that data management practices are strategically integrated with the organization’s mission, enabling businesses to leverage data confidently and effectively. By focusing on alignment, organizations can make better decisions, respond swiftly to market changes, and build stronger relationships with customers. 


What is an IT consultant? Roles, types, salaries, and how to become one

Because technology is continuously changing, IT consultants can provide clients with the latest information about new technologies as they become available, recommending implementation strategies based on their clients’ needs. As a result, keeping a finger on the pulse of the technology market is essential for IT consultants. “Being a successful IT consultant requires knowing how to walk in the shoes of your IT clients and their business leaders,” says Scott Buchholz, CTO of the government and public services sector practice at consulting firm Deloitte. A consultant’s job is to assess the whole situation, the challenges, and the opportunities at an organization, Buchholz says. As an outsider, the consultant can see things clients can’t. ... “We’re seeing the most in-demand types of consultants being those who specialize in cybersecurity and digital transformation, largely due to increased reliance on remote work and increased risk of cyberattacks,” he says. In addition, consultants with program management skills are valuable for supporting technology projects, assessing technology strategies, and helping organizations compare and make informed decisions about their technology investments, Farnsworth says.


Blockchain + AI: Decentralized Machine Learning Platforms Changing the Game

Tech giants with vast computing resources and proprietary datasets have long dominated traditional AI development. Companies like Google, Amazon, and Microsoft have maintained a virtual monopoly on advanced AI capabilities, creating a significant barrier to entry for smaller players and independent researchers. However, the introduction of blockchain technology and cryptocurrency incentives is rapidly changing this paradigm. Decentralized machine learning platforms leverage blockchain's distributed nature to create vast networks of computing power. These networks function like a global supercomputer, where participants can contribute their unused computing resources in exchange for cryptocurrency tokens. ... The technical architecture of these platforms typically consists of several key components. Smart contracts manage the distribution of computational tasks and token rewards, ensuring transparent and automatic execution of agreements between parties. Distributed storage solutions like IPFS (InterPlanetary File System) handle the massive datasets required for AI training, while blockchain networks maintain an immutable record of transactions and model provenance.
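As a loose illustration of that coordination layer, the toy sketch below records task assignments and token rewards in a hash-chained log. In a real platform this bookkeeping would live in an on-chain smart contract; all names and reward amounts here are made up.

```python
# Toy model of the coordination layer: compute tasks are assigned to
# contributors and token rewards recorded in an append-only, hash-chained log.
# In a real platform this logic would live in a smart contract; the worker
# names, task IDs, and reward amounts are illustrative only.

import hashlib
from dataclasses import dataclass, field

@dataclass
class RewardLedger:
    entries: list[dict] = field(default_factory=list)

    def record(self, worker: str, task_id: str, tokens: float) -> str:
        # Chain each entry to the previous one so tampering is detectable.
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {"worker": worker, "task": task_id, "tokens": tokens, "prev": prev_hash}
        entry["hash"] = hashlib.sha256(str(entry).encode()).hexdigest()
        self.entries.append(entry)
        return entry["hash"]

ledger = RewardLedger()
ledger.record("node-17", "train-batch-042", tokens=1.5)
ledger.record("node-03", "train-batch-043", tokens=1.5)
print(len(ledger.entries), "entries; last links to", ledger.entries[-1]["prev"][:12])
```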


DDoS Attacks Surge as Africa Expands Its Digital Footprint

A larger attack surface, however, is not the only reason for the increased DDoS activity in Africa and the Middle East, Hummel says. "Geopolitical tensions in these regions are also fueling a surge in hacktivist activity as real-world political disputes spill over into the digital world," he says. "Unfortunately, hacktivists often target critical infrastructure like government services, utilities, and banks to cause maximum disruption." And DDoS attacks are by no means the only manifestation of the new threats that organizations in Africa are having to contend with as they broaden their digital footprint. ... Attacks on critical infrastructure and financially motivated attacks by organized crime are other looming concerns. In the center's assessment, Africa's government networks and networks belonging to the military, banking, and telecom sectors are all vulnerable to disruptive cyberattacks. Exacerbating the concern is the relatively high potential for cyber incidents resulting from negligence and accidents. Organized crime gangs, the scourge of organizations in the US, Europe, and other parts of the world, present an emerging threat to organizations in Africa, the center has assessed.


Optimizing AI Workflows for Hybrid IT Environments

Hybrid IT offers flexibility by combining the scalability of the cloud with the control of on-premises resources, allowing companies to allocate their resources more precisely. However, this setup also introduces complexity. Managing data flow, ensuring security, and maintaining operational efficiency across such a blended environment can become an overwhelming task if not addressed strategically. To manage AI workflows effectively in this kind of setup, businesses must focus on harmonizing infrastructure and resources. ... Performance optimization is crucial when running AI workloads across hybrid environments. This requires real-time monitoring of both on-premises and cloud systems to identify bottlenecks and inefficiencies. Implementing performance management tools allows for end-to-end visibility of AI workflows, enabling teams to proactively address performance issues before they escalate. ... Scalability also supports agility, which is crucial for businesses that need to grow and iterate on AI models frequently. Cloud-based services, in particular, allow teams to experiment and test AI models without being constrained by on-premises hardware limitations. This flexibility is essential for staying competitive in fields where AI innovation happens rapidly.
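To illustrate the kind of bottleneck detection described above, the sketch below averages per-stage latency samples from on-premises and cloud components and flags the slowest stage. The stage names and numbers are invented; a production setup would pull these figures from an observability or APM tool rather than a hard-coded dictionary.

```python
# Sketch: collect per-stage latencies from on-prem and cloud components of an
# AI pipeline and flag the slowest stage as the bottleneck. Stage names and
# sample values are illustrative; real deployments would source them from an
# observability tool.

from statistics import mean

# environment -> stage -> recent latency samples in milliseconds
latencies = {
    "on_prem": {"feature_extraction": [120, 135, 128]},
    "cloud": {"model_inference": [340, 360, 355], "postprocessing": [45, 50, 48]},
}

def find_bottleneck(samples: dict) -> tuple[str, str, float]:
    worst = ("", "", 0.0)
    for env, stages in samples.items():
        for stage, values in stages.items():
            avg = mean(values)
            if avg > worst[2]:
                worst = (env, stage, avg)
    return worst

env, stage, avg = find_bottleneck(latencies)
print(f"bottleneck: {stage} ({env}), avg {avg:.0f} ms")
```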


The Cloud Back-Flip

Cloud repatriation is driven by various factors, including high cloud bills, hidden costs, complexity, data sovereignty, and the need for greater data control. In markets like India—and globally—these factors are all relevant today, points out Vishal Kamani, Cloud Business Head, Kyndryl India. “Currently, rising cloud costs and complexity are part of the ‘learning curve’ for enterprises transitioning to cloud operations.” ... While cloud repatriation is not an alien concept anymore, such reverse migration back to on-premises data centres is mostly seen in organisations that are technology-driven and have deep tech expertise, observes Gaurang Pandya, Director, Deloitte India. “This involves them focusing back on the basics of IT infrastructure which does need a high number of skilled employees. The major driver for such reverse migration is increasing cloud prices and performance requirements. In an era of edge computing and 5G, each end system has now been equipped with much more computing resources than it ever had. This increases their expectations from various service providers.” Money is a big reason too, especially when you don’t know where it is going.


Why Great Programmers fail at Engineering

Being a good programmer is about mastering the details — syntax, algorithms, and efficiency. But being a great engineer? That’s about seeing the bigger picture: understanding systems, designing for scale, collaborating with teams, and ultimately creating software that not only works but excels in the messy, ever-changing real world. ... Good programmers focus on mastering their tools — languages, libraries, and frameworks — and take pride in crafting solutions that are both functional and beautiful. They are the “builders” who bring ideas to life one line of code at a time. ... Software engineering requires a keen understanding of design principles and system architecture. Great code in a poorly designed system is like building a solid wall in a crumbling house — it doesn’t matter how good it looks if the foundation is flawed. Many programmers struggle to: design systems for scalability and maintainability; think in terms of trade-offs, such as performance vs. development speed; and plan for edge cases and future growth. Software engineering is as much about people as it is about code. Great engineers collaborate with teams, communicate ideas clearly, and balance stakeholder expectations. ... Programming success is often measured by how well the code runs, but engineering success is about how well the system solves a real-world problem.



Quote for the day:

"Ambition is the path to success. Persistence is the vehicle you arrive in." -- Bill Bradley