Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. With concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose. ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits; they must prove them by demonstrating how it enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.
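
As a concrete illustration of what automated lifecycle checks look like, here is a minimal sketch in Python using the cryptography package (version 42 or later for the timezone-aware expiry accessor). The directory layout and 15-day renewal window are illustrative assumptions, not prescriptions from the article:

```python
# Flag certificates nearing expiry so automated renewal tooling can rotate them.
from datetime import datetime, timedelta, timezone
from pathlib import Path

from cryptography import x509

RENEWAL_WINDOW = timedelta(days=15)  # short-lived certs need tight windows

def certs_due_for_renewal(cert_dir: str) -> list[str]:
    due = []
    for pem in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
        if cert.not_valid_after_utc - datetime.now(timezone.utc) < RENEWAL_WINDOW:
            due.append(pem.name)
    return due

if __name__ == "__main__":
    for name in certs_due_for_renewal("./certs"):
        print(f"renew: {name}")  # hand off to an ACME client or CLM platform
```

In a mature setup, a check like this runs continuously and feeds an automated renewal pipeline, which is what turns frequent rotation from a burden into the strategic advantage described above.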


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often frustrate employees more than they improve their security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies and controls, and being able to identify assets and their locations when necessary, of course. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernise your legacy systems is a challenging task. Generative AI helps overcome the challenges faced in legacy systems and accelerates modernization. For example, it can be used to capture how legacy systems function in detailed business requirements. The resulting documents can then be used to build new systems on the cloud in the second phase. This can make the process cheaper, too, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native inclusion. ... You need to have a holistic assessment of your existing system to implement generative AI effectively. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and intricacies of the data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats, as these approaches focus on infrastructure while neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline, as the best defence is not solely a hardened perimeter; it’s also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience; they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 
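
To make the calibration point concrete, a harness along these lines could sweep reasoning budgets and measure where accuracy actually peaks; solve() is a hypothetical stand-in for a budgeted call to whatever model API your stack uses, and the budgets and dataset are placeholders:

```python
# Sweep test-time compute budgets instead of assuming more is always better.
def solve(problem: str, budget: int) -> str:
    # Hypothetical: call your model with a token/time budget, return its answer.
    raise NotImplementedError

def accuracy_by_budget(problems, answers, budgets=(256, 1024, 4096, 16384)):
    results = {}
    for budget in budgets:
        correct = sum(
            solve(p, budget).strip() == a.strip()
            for p, a in zip(problems, answers)
        )
        results[budget] = correct / len(problems)
    return results  # deploy with the budget where accuracy plateaus or peaks
```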


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations and conveying information in a clear, simple manner that resonates with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - July 22, 2025


Quote for the day:

“Being responsible sometimes means pissing people off.” -- Colin Powell


It might be time for IT to consider AI models that don’t steal

One option that has many pros and cons is to use genAI models that explicitly avoid training on any information that is legally dicey. There are a handful of university-led initiatives that say they try to limit model training data to information that is legally in the clear, such as open source or public domain material. ... “Is it practical to replace the leading models of today right now? No. But that is not the point. This level of quality was built on just 32 ethical data sources. There are millions more that can be used,” Wiggins wrote in response to a reader’s comment on his post. “This is a baseline that proves that Big AI lied. Efforts are underway to add more data that will bring it up to more competitive levels. It is not there yet.” Still, enterprises are investing in and planning for genAI deployments for the long term, and they may find in time that ethically sourced models deliver both safety and performance. ... Tipping the scales in the other direction are the big model makers’ promises of indemnification. Some genAI vendors have said they will cover the legal costs for customers who are sued over content produced by their models. “If the model provides indemnification, this is what enterprises should shoot for,” Moor’s Andersen said.


The unique, mathematical shortcuts language models use to predict dynamic scenarios

One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together. The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. ... “These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”
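
A toy way to see the difference between step-by-step state tracking and the hierarchical grouping described here is to compose a sequence of permutations both ways; because composition is associative, the balanced-tree ("branch") order yields the same final arrangement as the sequential scan. This sketch illustrates the mathematical idea only, not the transformer's internal mechanism:

```python
def compose(p, q):
    # Apply permutation p first, then q; permutations are tuples of indices.
    return tuple(q[i] for i in p)

def sequential_scan(perms):
    state = tuple(range(len(perms[0])))  # identity: the initial arrangement
    for p in perms:                      # naive step-by-step state tracking
        state = compose(state, p)
    return state

def tree_combine(perms):
    if len(perms) == 1:
        return perms[0]
    mid = len(perms) // 2                # group adjacent steps into branches
    return compose(tree_combine(perms[:mid]), tree_combine(perms[mid:]))

swap01, cycle = (1, 0, 2), (1, 2, 0)
steps = [swap01, cycle, swap01, cycle]
assert sequential_scan(steps) == tree_combine(steps)  # associativity at work
```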


Role of AI in fortifying cryptocurrency security

In the rapidly expanding realm of Decentralised Finance (DeFi), AI will play a critical role in optimising complex lending, borrowing, and trading protocols. AI can intelligently manage liquidity pools, optimise yield farming strategies for better returns and reduced impermanent loss, and even identify subtle arbitrage opportunities across various platforms. Crucially, AI will also be vital in identifying and mitigating novel types of exploits that are unique to the intricate and interconnected world of DeFi. Looking further ahead, AI will be crucial in developing Quantum-Resistant Cryptography. As quantum computing advances, it poses a theoretical threat to the underlying cryptographic methods that secure current blockchain networks. AI can significantly accelerate the research and development of “post-quantum cryptography” (PQC) algorithms, which are designed to withstand the immense computational power of future quantum computers. AI can also be used to simulate quantum attacks, rigorously testing existing and new cryptographic designs for vulnerabilities. Finally, the concept of Autonomous Regulation could redefine oversight in the crypto space. Instead of traditional, reactive regulatory approaches, AI-driven frameworks could provide real-time, proactive oversight without stifling innovation. 


From Visibility to Action: Why CTEM Is Essential for Modern Cybersecurity Resilience

CTEM shifts the focus from managing IT vulnerabilities in isolation to managing exposure in collaboration, something that’s far more aligned with the operational priorities of today’s organizations. Where traditional approaches center around known vulnerabilities and technical severity, CTEM introduces a more business-driven lens. It demands ongoing visibility, context-rich prioritization, and a tighter alignment between security efforts and organizational impact. In doing so, it moves the conversation from “What’s vulnerable?” to “What actually matters right now?” – a far more useful question when resilience is on the line. What makes CTEM particularly relevant beyond security teams is its emphasis on continuous alignment between exposure data and operational decision-making. This makes it valuable not just for threat reduction, but for supporting broader resilience efforts, ensuring resources are directed toward the exposures most likely to disrupt critical operations. It also complements, rather than replaces, existing practices like attack surface management (ASM). CTEM builds on these foundations with more structured prioritization, validation, and mobilization, turning visibility into actionable risk reduction. 


Driving Platform Adoption: Community Is Your Value

Remember that in a Platform as a Product approach, developers are your customers. If they don’t know what’s available, how to use it or what’s coming next, they’ll find workarounds. These conferences and speaker series are a way to keep developers engaged, improve adoption and ensure the platform stays relevant. There’s a human side to this too, one often left out of the focus on “the business value” and outcomes in corporate-land: just having a friendly community of humans who like to spend time with each other and learn. ... Successful platform teams have active platform advocacy. This requires at least one person working full time to essentially build empathy with your users by working with and listening to the people who use your platforms. You may start with just one platform advocate who visits with developer teams, listening for feedback while teaching them how to use the platform and associated methodologies. The advocate acts as both a counselor and delegate for your developers. ... The journey to successful platform adoption is more than just communicating technical prowess. Embracing systematic approaches to platform marketing that include clear messaging and positioning based on customers’ needs and a strong brand ethos is the key to communicating the value of your platform.


9 AI development skills tech companies want

“It’s not enough to know how a transformer model works; what matters is knowing when and why to use AI to drive business outcomes,” says Scott Weller, CTO of AI-powered credit risk analysis platform EnFi. “Developers need to understand the tradeoffs between heuristics, traditional software, and machine learning, as well as how to embed AI in workflows in ways that are practical, measurable, and responsible.” ... “In AI-first systems, data is the product,” Weller says. “Developers must be comfortable acquiring, cleaning, labeling, and analyzing data, because poor data hygiene leads to poor model performance.” ... AI safety and reliability engineering “looks at the zero-tolerance safety environment of factory operations, where AI failures could cause safety incidents or production shutdowns,” Miller says. To ensure the trust of its customers, IFS needs developers who can build comprehensive monitoring systems to detect when AI predictions become unreliable and implement automated rollback mechanisms to traditional control methods when needed, Miller says. ... “With the rapid growth of large language models, developers now require a deep understanding of prompt design, effective management of context windows, and seamless integration with LLM APIs—skills that extend well beyond basic ChatGPT interactions,” Tupe says.


Why AI-Driven Logistics and Supply Chains Need Resilient, Always-On Networks

Something worth noting about increased AI usage in supply chains is that as AI-enabled systems become more complex, they also become more delicate, which increases the potential for outages. Something as simple as a single misconfiguration or unintentional interaction between automated security gates can lead to a network outage, preventing supply chain personnel from accessing critical AI applications. During an outage, AI clusters (interconnected GPU/TPU nodes used for training and inference) can also become unavailable. ... Businesses must increase network resiliency to ensure their supply chain and logistics teams always have access to key AI applications, even during network outages and other disruptions. One approach that companies can take to strengthen network resilience is to implement purpose-built infrastructure like out of band (OOB) management. With OOB management, network administrators can separate and containerize functions of the management plane, allowing it to operate freely from the primary in-band network. This secondary network acts as an always-available, independent, dedicated channel that administrators can use to remotely access, manage, and troubleshoot network infrastructure.


From architecture to AI: Building future-ready data centers

In some cases, the pace of change is so fast that buildings are being retrofitted even as they are being constructed. O'Rourke has observed that once CPUs are installed, data center owners opt to upgrade racks row by row, rather than converting the entire facility to liquid cooling at once – largely because the building wasn’t originally designed to support higher-density racks. To accommodate this reality, Tate carries out in-row upgrades by providing specialized structures to mount manifolds, which distribute coolant from air-cooled chillers throughout the data halls. “Our role is to support the physical distribution of that cooling infrastructure,” explains O'Rourke. “Manifold systems can’t be supported by existing ceilings or hot aisle containment due to weight limits, so we’ve developed floor-mounted frameworks to hold them.” He adds: “GPU racks also can’t replace all CPU racks one-to-one, as the building structure often can’t support the added load. Instead, GPUs must be strategically placed, and we’ve created solutions to support these selective upgrades.” By designing manifold systems with actuators that integrate with the building management system (BMS), along with compatible hot aisle containment and ceiling structures, Tate has developed a seamless, integrated solution for the white space.


Weaving reality or warping it? The personalization trap in AI systems

At first, personalization was a way to improve “stickiness” by keeping users engaged longer, returning more often and interacting more deeply with a site or service. Recommendation engines, tailored ads and curated feeds were all designed to keep our attention just a little longer, perhaps to entertain but often to move us to purchase a product. But over time, the goal has expanded. Personalization is no longer just about what holds us. It is what it knows about each of us, the dynamic graph of our preferences, beliefs and behaviors that becomes more refined with every interaction. Today’s AI systems do not merely predict our preferences. They aim to create a bond through highly personalized interactions and responses, creating a sense that the AI system understands and cares about the user and supports their uniqueness. The tone of a chatbot, the pacing of a reply and the emotional valence of a suggestion are calibrated not only for efficiency but for resonance, pointing toward a more helpful era of technology. It should not be surprising that some people have even fallen in love and married their bots. The machine adapts not just to what we click on, but to who we appear to be. It reflects us back to ourselves in ways that feel intimate, even empathic. 


Microsoft Rushes to Stop Hackers from Wreaking Global Havoc

Multiple hacking groups are launching attacks through the Microsoft vulnerability, according to representatives of two cybersecurity firms, CrowdStrike Holdings, Inc. and Google's Mandiant Consulting. Hackers have already used the flaw to break into the systems of national governments in Europe and the Middle East, according to a person familiar with the matter. In the US, they've accessed government systems, including ones belonging to the US Department of Education, Florida's Department of Revenue and the Rhode Island General Assembly, said the person, who spoke on condition that they not be identified discussing the sensitive information. ... The breaches have drawn new scrutiny to Microsoft's efforts to shore up its cybersecurity after a series of high-profile failures. The firm has hired executives from places like the US government and holds weekly meetings with senior executives to make its software more resilient. The company's tech has been subject to several widespread and damaging hacks in recent years, and a 2024 US government report described the company's security culture as in need of urgent reforms. ... "There were ways around the patches," which enabled hackers to break into SharePoint servers by tapping into similar vulnerabilities, said Bernard. "That allowed these attacks to happen."

Daily Tech Digest - July 21, 2025


Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson


Is AI here to take or redefine your cybersecurity role?

Unlike Thibodeaux, Watson believes the level-one SOC analyst role “is going to be eradicated” by AI eventually. But he agrees with Thibodeaux that AI will move the table stakes forward on the skills needed to land a starter job in cyber. “The thing that will be cannibalized first is the sort of entry-level basic repeatable tasks, the things that people traditionally might have cut their teeth on in order to sort of progress to the next level. Therefore, the skill requirement to get a role in cybersecurity will be higher than what it has been traditionally,” says Watson. To help cyber professionals attain AI skills, CompTIA is developing a new certification program called SecAI. The course will target cyber people who already have three to four years of experience in a core cybersecurity job. The curriculum will include practical AI skills to proactively combat emerging cyber threats, integrating AI into security operations, defending against AI-driven attacks, and compliance for AI ethics and governance standards. ... As artificial intelligence takes over a rising number of technical cybersecurity tasks, Watson says one of the best ways security workers can boost their employment value is by sharpening their human skills like business literacy and communication: “The role is shifting to be one of partnering and advising because a lot of the technology is doing the monitoring, triaging, quarantining and so on.”


5 tips for building foundation models for AI

"We have to be mindful that, when it comes to training these models, we're doing it purposefully, because you can waste a lot of cycles on the exercise of learning," he said. "The execution of these models takes far less energy and resources than the actual training." OS usually feeds training data to its models in chunks. "Building up the label data takes quite a lot of time," he said. "You have to curate data across the country with a wide variety of classes that you're trying to learn from, so a different mix between urban and rural, and more." The organisation first builds a small model that uses several hundred examples. This approach helps to constrain costs and ensures OS is headed in the right direction. "Then we slowly build up that labelled set," Jethwa said. "I think we're now into the hundreds of thousands of labelled examples. Typically, these models are trained with millions of labelled datasets." While the organization's models are smaller, the results are impressive. "We're already outperforming the existing models that are out there from the large providers because those models are trained on a wider variety of images," he said. "The models might solve a wider variety of problems, but, for our specific domain, we outperform those models, even at a smaller scale."


Reduce, re-use, be frugal with AI and data

By being more selective with the data included in language models, businesses can better control their carbon emissions, limiting energy to be spent on the most important resources. In healthcare, for example, separating the most up-to-date medical information and guidance from the rest of the information on that topic will mean safer, more reliable and faster responses to patient treatment. ... Frugal AI means adopting an intelligent approach to data that focuses on using the most valuable information only. When businesses have a greater understanding of their data, how to label it, identify it and which teams are responsible for its deletion, then the storage of single use data can be significantly reduced. Only then can frugal AI systems be put in place, allowing businesses to adopt a resource-aware and efficient approach to both their data consumption and AI usage. It’s important to stress here, though, that frugal AI doesn’t mean that the end results are lesser or of a reduced impact; it means that the data that goes into AI is concentrated, smaller but just as impactful. Think of it like making a drink with extra-concentrated squash. Frugal AI is that extra-concentrated squash that puts data efficiency, consideration and strategy at the centre of an organisation’s AI ambitions.


Cyber turbulence ahead as airlines strap in for a security crisis

Although organizations have acknowledged the need to boost spending, progress remains to be made and new measures adopted. Legacy OT systems, which often lack security features such as automated patching and built-in encryption, should be addressed as a top priority. Although upgrading these systems can be costly, it is essential to prevent further disruptions and vulnerabilities. Mapping the aviation supply chain helps identify all key partners, which is important for conducting security audits and enforcing contractual cybersecurity requirements. This should be reinforced with multi-layered perimeter defenses, including encryption, firewalls, and intrusion detection systems, alongside zero-trust network segmentation to minimize the risk of attackers moving laterally within networks. Companies should implement real-time threat monitoring and response by deploying intrusion detection systems, centralizing analysis with SIEM, and maintaining a regularly tested incident response plan to identify, contain, and mitigate cyberattacks. ... One of the most important steps is to train all staff, including pilots and ground crews, to recognize scams. Since recent security breaches have mostly relied on social engineering tactics, this type of training is essential. A single phone call or a convincing email can be enough to trigger a data breach. 


What Does It Mean to Be Data-Driven?

A data-driven organization understands the value of its data and the best ways to capitalize on that value. Its data assets are aligned with its goals and the processes in place to achieve those goals. Protecting the company’s data assets requires incorporating governance practices to ensure managers and employees abide by privacy, security, and integrity guidelines. In addition to proper data governance, the challenges to implementing a data-driven infrastructure for business processes are data quality and integrity, data integration, talent acquisition, and change management. ... To ensure the success of their increasingly critical data initiatives, organizations look to the characteristics that led to effective adoption of data-driven programs at other companies. Management services firm KPMG identifies four key characteristics of successful data-driven initiatives: leadership involvement, investments in digital literacy, seamless access to data assets, and promotion and monitoring. ... While data-as-a-service (DaaS) emphasizes the sale of external data, data as a product (DaaP) considers all of a company’s data and the mechanisms in place for moving and storing the data as a product that internal operations rely on. The data team becomes a “vendor” serving “customers” throughout the organization.


AI Needs a Firewall and Cloud Needs a Rethink

Hyperscalers dominate most of enterprise IT today, and few are willing to challenge the status quo of cloud economics, artificial intelligence infrastructure and cybersecurity architectures. But Tom Leighton, co-founder and CEO of Akamai, does just that. He argues that the cloud has become bloated, expensive and overly centralized. The internet needs a new kind of infrastructure that is distributed, secure by design and optimized for performance at the edge, Leighton told Information Security Media Group. From edge-native AI inference and API security to the world's first firewall for artificial intelligence, Akamai is no longer just delivering content - it's redesigning the future. ... Among the most notable developments Leighton discussed was a new product category: an AI firewall. "People are training models on sensitive data and then exposing them to the public. That creates a new attack surface," Leighton said. "AI hallucinates. You never know what it's going to do. And the bad guys have figured out how to trick models into leaking data or doing bad things." Akamai's AI firewall monitors prompts and responses to prevent malicious prompts from manipulating the model and to avoid leaking sensitive data. "It can be implemented on-premises, in the cloud or within Akamai's platform, providing flexibility based on customer preference."
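
The core idea is simple enough to sketch: sit between the user and the model, screen prompts for injection-style instructions, and redact leak-prone patterns from responses. The patterns below are deliberately crude placeholders for illustration; a production AI firewall like the one described uses far richer detection:

```python
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (your )?system prompt",
]
LEAK_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",    # US-SSN-like identifiers
    r"\b(?:\d[ -]?){13,16}\b",   # card-number-like digit runs
]

def prompt_allowed(prompt: str) -> bool:
    return not any(re.search(p, prompt, re.I) for p in INJECTION_PATTERNS)

def scrub_response(text: str) -> str:
    for p in LEAK_PATTERNS:
        text = re.sub(p, "[REDACTED]", text)
    return text

def guarded_call(model, prompt: str) -> str:
    # model: any callable str -> str that reaches the underlying LLM
    if not prompt_allowed(prompt):
        return "Request blocked by policy."
    return scrub_response(model(prompt))
```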


Human and machine: Rediscovering our humanity in the age of AI

In an era defined by the rapid advancement of AI, machines are increasingly capable of tasks once considered uniquely human. ... Ethical decision-making, relationship building and empathy have been identified as the most valuable, both in our present reality and in the AI-driven future. ... As we navigate this era of AI, we must remember that technology is a tool, not a replacement for humanity. By embracing our capacity for creativity, connection and empathy, we can ensure that AI serves to enhance our humanity, not diminish it. This means accepting that preserving our humanness sometimes requires assistance. It means investing in education and training that fosters critical thinking, problem-solving and emotional intelligence. It means creating workplaces that value human connection and collaboration, where employees feel supported and empowered to bring their whole selves to work. And it means fostering a culture that celebrates creativity, innovation and the pursuit of knowledge. At a time when seven out of every ten companies are already using AI in at least one business function, let us embrace the challenge of this new era with both optimism and intentionality. Let us use AI to build a better future for ourselves and for generations to come – a future where technology serves humanity, and where every individual has the opportunity to thrive.


‘Interoperable but not identical’: applying ID standards across diverse communities

Exchanging knowledge and experiences with identity systems to improve future ID projects is central to ID4Africa's mission. At this year's ID4Africa AGM in Addis Ababa, Ethiopia, a tension was more evident than ever before between the quest for transferable insights and replicable successes and the uniqueness of each African nation. Thales Cybersecurity and Digital Identity Field Marketing Director for the Middle East and Africa Jean Lindner wrote in an emailed response to questions from Biometric Update following the event that the mix of attendees reflected that "every African country has its own diverse history or development maturity and therefore unique legacy identity systems, with different constraints. Let us recognize here there is no unique quick-fix to country-specific hurdles," he says. The lessons of one country can only benefit another to the extent that common ground is identified. The development of the concept of digital public infrastructure has mapped out some common ground, but standards and collaborative organizations have a major role to play. Unfortunately, Stéphanie de Labriolle, executive director services at the Secure Identity Alliance, says "the widespread lack of clarity around standards and what compliance truly entails" was striking at this year's ID4Africa AGM.


The Race to Shut Hackers out of IoT Networks

Considered among the weakest links in enterprise networks, IoT devices are used across industries to perform critical tasks at a rapid rate. An estimated 57% of deployed units "are susceptible to medium- or high-severity attacks," according to research from security vendor Palo Alto Networks. IoT units are inherently vulnerable to security attacks, and enterprises are typically responsible for protecting against threats. Additionally, the IoT industry hasn't settled on standardized security, as time to market is sometimes a priority over standards. ... 3GPP developed RedCap to provide a viable option for enterprises seeking a higher-performance, feature-rich 5G alternative to traditional IoT connectivity options such as low-power WANs (LPWANs). LPWANs are traditionally used to transmit limited data over low-speed cellular links at a low cost. In contrast, RedCap offers moderate bandwidth and enhanced features for more demanding use cases, such as video surveillance cameras, industrial control systems in manufacturing and smart building infrastructure. ... From a security standpoint, RedCap inherits strong capabilities in 5G, such as authentication, encryption and integrity protection. It can also be supplemented at application and device levels for a multilayered security approach.


Architecting the MVP in the Age of AI

A key aspect of architecting an MVP is forming and testing hypotheses about how the system will meet its quality attribute requirements (QARs). Understanding and prioritizing these QARs is not an easy task, especially for teams without a lot of architecture experience. AI can help when teams provide context by describing the QARs that the system must satisfy in a prompt and asking the LLM to suggest related requirements. The LLM may suggest additional QARs that the team may have overlooked. For example, if performance, security, and usability are the top 3 QARs that a team is considering, an LLM may suggest looking at scalability and resilience as well. This can be especially helpful for people who are new to software architecture. ... Sometimes validating the AI’s results may require more skills than would be required to create the solution from scratch, just as is sometimes the case when seeing someone else’s code and realizing that it’s better than what you would have developed on your own. This can be an effective way to improve developers’ skills, provided that the code is good. AI can also help you find and fix bugs in your code that you may miss. Beyond simple code inspection, experimentation provides a means of validating the results produced by AI. In fact, experimentation is the only real way to validate it, as some researchers have discovered.
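
As a sketch of that prompting pattern, the snippet below asks a model to propose overlooked QARs. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are illustrative, and any LLM client would do:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_missing_qars(top_qars: list[str]) -> str:
    prompt = (
        "We are architecting an MVP. Our top quality attribute requirements "
        f"are: {', '.join(top_qars)}. Which related QARs might we have "
        "overlooked, and why? Answer briefly."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(suggest_missing_qars(["performance", "security", "usability"]))
```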

Daily Tech Digest - July 20, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


Lean Agents: The Agile Workforce of Agentic AI

Organizations are tired of gold‑plated mega systems that promise everything and deliver chaos. Enter frameworks like AutoGen and LangGraph, alongside protocols such as MCP, all enabling Lean Agents to be spun up on demand, plug into APIs, execute a defined task, then quietly retire. This is a radical departure from heavyweight models that stay online indefinitely, consuming compute cycles, budget, and attention. ... Lean Agents are purpose-built AI workers; minimal in design, maximally efficient in function. Think of them as stateless or scoped-memory micro-agents: they wake when triggered, perform a discrete task like summarizing an RFP clause or flagging anomalies in payments, and then gracefully exit, freeing resources and eliminating runtime drag. Lean Agents are to AI what Lambda functions are to code: ephemeral, single-purpose, and cloud-native. They may hold just enough context to operate reliably but otherwise avoid persistent state that bloats memory and complicates governance. ... From a technology standpoint, these frameworks, combined with the emerging Model‑Context Protocol (MCP), give engineering teams the scaffolding to create discoverable, policy‑aware agent meshes. Lean Agents transform AI from a monolithic “brain in the cloud” into an elastic workforce that can be budgeted, secured, and reasoned about like any other microservice.
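
A minimal sketch of that lifecycle, with the task body as a placeholder (a real agent would call a model or tool through AutoGen, LangGraph, or an MCP server at the marked point):

```python
from contextlib import contextmanager

@contextmanager
def lean_agent(task_name: str, context: dict):
    agent = {"task": task_name, "memory": dict(context)}  # scoped, not global
    try:
        yield agent       # wake on trigger, perform one discrete task
    finally:
        agent["memory"].clear()  # gracefully exit: no persistent state left

def summarize_rfp_clause(clause: str) -> str:
    with lean_agent("rfp-summary", {"clause": clause}) as agent:
        # placeholder: a real agent would invoke an LLM or tool here
        return f"[{agent['task']}] {agent['memory']['clause'][:80]}..."

print(summarize_rfp_clause("The vendor shall provide 99.9% uptime ..."))
```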


Cloud Repatriation Is Harder Than You Think

Repatriation is not simply a reverse lift-and-shift process. Workloads that have developed in the cloud often have specific architectural dependencies that are not present in on-premises environments. These dependencies can include managed services like identity providers, autoscaling groups, proprietary storage solutions, and serverless components. As a result, moving a workload back on-premises typically requires substantial refactoring and a thorough risk assessment. Untangling these complex layers is more than just a migration; it represents a structural transformation. If the service expectations are not met, repatriated applications may experience poor performance or even fail completely. ... You cannot migrate what you cannot see. Accurate workload planning relies on complete visibility, which includes not only documented assets but also shadow infrastructure, dynamic service relationships, and internal east-west traffic flows. Static tools such as CMDBs or Visio diagrams often fall out of date quickly and fail to capture real-time behavior. These gaps create blind spots during the repatriation process. Application dependency mapping addresses this issue by illustrating how systems truly interact at both the network and application layers. Without this mapping, teams risk disrupting critical connections that may not be evident on paper.


AI Agents Are Creating a New Security Nightmare for Enterprises and Startups

The agentic AI landscape is still in its nascent stages, making it the opportune moment for engineering leaders to establish robust foundational infrastructure. While the technology is rapidly evolving, the core patterns for governance are familiar: Proxies, gateways, policies, and monitoring. Organizations should begin by gaining visibility into where agents are already running autonomously — chatbots, data summarizers, background jobs — and add basic logging. Even simple logs like “Agent X called API Y” are better than nothing. Routing agent traffic through existing proxies or gateways in a reverse mode can eliminate immediate blind spots. Implementing hard limits on timeouts, max retries, and API budgets can prevent runaway costs. While commercial AI gateway solutions are emerging, such as Lunar.dev, teams can start by repurposing existing tools like Envoy, HAProxy, or simple wrappers around LLM APIs to control and observe traffic. Some teams have built minimal “LLM proxies” in days, adding logging, kill switches, and rate limits. Concurrently, defining organization-wide AI policies — such as restricting access to sensitive data or requiring human review for regulated outputs — is crucial, with these policies enforced through the gateway and developer training.
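
A minimal "LLM proxy" of the kind described might look like the sketch below: one choke point that logs every agent call and enforces a rate limit, a hard budget, and a kill switch. The limits are illustrative, and the wrapped call can be any function that reaches a model or API:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)

class LLMProxy:
    def __init__(self, call, max_calls_per_min: int = 30, budget: int = 1000):
        self.call = call                  # any function that reaches the model
        self.max_calls_per_min = max_calls_per_min
        self.budget = budget              # total calls before a hard stop
        self.killed = False
        self.window: list[float] = []

    def kill(self):
        self.killed = True                # flip to halt all agent traffic

    def __call__(self, agent: str, api: str, payload: str) -> str:
        if self.killed:
            raise RuntimeError("kill switch engaged")
        now = time.time()
        self.window = [t for t in self.window if now - t < 60]
        if len(self.window) >= self.max_calls_per_min or self.budget <= 0:
            raise RuntimeError("rate limit or budget exceeded")
        self.window.append(now)
        self.budget -= 1
        logging.info("Agent %s called API %s", agent, api)  # even simple logs help
        return self.call(payload)
```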


The Evolution of Software Testing in 2025: A Comprehensive Analysis

The testing community has evolved beyond the conventional shift-left and shift-right approaches to embrace what industry leaders term "shift-smart" testing. This holistic strategy recognizes that quality assurance must be embedded throughout the entire software development lifecycle, from initial design concepts through production monitoring and beyond. While shift-left testing continues to emphasize early validation during development phases, shift-right testing has gained equal prominence through its focus on observability, chaos engineering, and real-time production testing. ... Modern testing platforms now provide insights into how testing outcomes relate to user churn rates, release delays, and net promoter scores, enabling organizations to understand the direct business impact of their quality assurance investments. This data-driven approach transforms testing from a technical activity into a business-critical function with measurable value. Artificial intelligence platforms are revolutionizing test prioritization by predicting where failures are most likely to occur, allowing testing teams to focus their efforts on the highest-risk areas. ... Modern testers are increasingly taking on roles as quality coaches, working collaboratively with development teams to improve test design and ensure comprehensive coverage aligned with product vision.


7 lessons I learned after switching from Google Drive to a home NAS

One of the first things I realized was that a NAS is only as fast as the network it’s sitting on. Even though my NAS had decent specs, file transfers felt sluggish over Wi-Fi. The new drives weren’t at fault, but my old router was proving to be a bottleneck. Once I wired things up and upgraded my router, the difference was night and day. Large files opened like they were local. So, if you’re expecting killer performance, make sure to look at the network box too, because it matters just as much as the NAS itself. ... There was a random blackout at my place, and until then, I hadn’t hooked my NAS to a power backup system. As a result, the NAS shut off mid-transfer without warning. I couldn’t tell if I had just lost a bunch of files or if the hard drives had been damaged too — and that was more than a little scary. I couldn’t let this happen again, so I decided to connect the NAS to an uninterruptible power supply (UPS). ... I assumed that once I uploaded my files to Google Drive, they were safe. Google would do the tiring job of syncing, duplicating, and mirroring on some faraway data center. But in a self-hosted environment, you are the one responsible for all that. I had to put safety nets in place for possible instances where a drive fails or the NAS dies. My current strategy involves keeping some archived files on a portable SSD, a few important folders synced to the cloud, and some everyday folders on my laptop set up to sync two-way with my NAS.


5 key questions your developers should be asking about MCP

Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair. ... Remote deployment obviously addresses the scaling problem but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by putting everything through a single /messages endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers. But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must. ... However, the biggest security consideration with MCP is around tool execution itself. Many tools need broad permissions to be useful, which means sweeping scope design is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations.
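
In practice, "support both" can start as crude protocol detection in front of two handlers. The paths and headers below are simplified illustrations rather than the MCP specification, but they capture the dispatch idea:

```python
def dispatch(path: str, method: str, accept: str) -> str:
    # Legacy clients open a long-lived SSE stream on a dedicated endpoint.
    if "text/event-stream" in accept and method == "GET":
        return "serve legacy HTTP+SSE stream"
    # Newer clients POST everything through the single /messages endpoint.
    if path == "/messages" and method == "POST":
        return "serve streamable HTTP session"
    return "400: unsupported transport"

print(dispatch("/messages", "POST", "application/json"))
print(dispatch("/sse", "GET", "text/event-stream"))
```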


Firmware Vulnerabilities Continue to Plague Supply Chain

"The major problem is that the device market is highly competitive and the vendors [are] competing not only to the time-to-market, but also for the pricing advantages," Matrosov says. "In many instances, some device manufacturers have considered security as an unnecessary additional expense." The complexity of the supply chain is not the only challenge for the developers of firmware and motherboards, says Martin Smolár, a malware researcher with ESET. The complexity of the code is also a major issue, he says. "Few people realize that UEFI firmware is comparable in size and complexity to operating systems — it literally consists of millions of lines of code," he says. ... One practice that hampers security: Vendors will often try to only distribute security fixes under a non-disclosure agreement, leaving many laptop OEMs unaware of potential vulnerabilities in their code. That's the exact situation that left Gigabyte's motherboards with a vulnerable firmware version. Firmware vendor AMI fixed the issues years ago, but the issues have still not propagated out to all the motherboard OEMs. ... Yet, because firmware is always evolving as better and more modern hardware is integrated into motherboards, the toolset also need to be modernized, Cobalt's Ollmann says.


Beyond Pilots: Reinventing Enterprise Operating Models with AI

Historically, AI models required vast volumes of clean, labeled data, making insights slow and costly. Large language models (LLMs) have upended this model, pre-trained on billions of data points and able to synthesize organizational knowledge, market signals, and past decisions to support complex, high-stakes judgment. AI is becoming a powerful engine for revenue generation through hyper-personalization of products and services, dynamic pricing strategies that react to real-time market conditions, and the creation of entirely new service offerings. More significantly, AI is evolving from completing predefined tasks to actively co-creating superior customer experiences through sophisticated conversational commerce platforms and intelligent virtual agents that understand context, nuance, and intent in ways that dramatically enhance engagement and satisfaction. ... In R&D and product development, AI is revolutionizing operating models by enabling faster go-to-market cycles. AI can simulate countless design alternatives, optimize complex supply chains in real time, and co-develop product features based on deep analysis of customer feedback and market trends. These systems can draw from historical R&D successes and failures across industries, accelerating innovation by applying lessons learned from diverse contexts and domains.


Alternative clouds are on the rise

Alt clouds, in their various forms, represent a departure from the “one size fits all” mentality that initially propelled the public cloud explosion. These alternatives to the Big Three prioritize specificity, specialization, and often offer an advantage through locality, control, or workload focus. Private cloud, epitomized by offerings from VMware and others, has found renewed relevance in a world grappling with escalating cloud bills, data sovereignty requirements, and unpredictable performance from shared infrastructure. The old narrative that “everything will run in the public cloud eventually” is being steadily undermined as organizations rediscover the value of dedicated infrastructure, either on-premises or in hosted environments that behave, in almost every respect, like cloud-native services. ... What begins as cost optimization or risk mitigation can quickly become an administrative burden, soaking up engineering time and escalating management costs. Enterprises embracing heterogeneity have no choice but to invest in architects and engineers who are familiar not only with AWS, Azure, or Google, but also with VMware, CoreWeave, a sovereign European platform, or a local MSP’s dashboard. 


Making security and development co-owners of DevSecOps

In my view, DevSecOps should be structured as a shared responsibility model, with ownership but no silos. Security teams must lead from a governance and risk perspective, defining the strategy, standards, and controls. However, true success happens when development teams take ownership of implementing those controls as part of their normal workflow. In my career, especially while leading security operations across highly regulated industries, including finance, telecom, and energy, I’ve found this dual-ownership model most effective. ... However, automation without context becomes dangerous, especially closer to deployment. I’ve led SOC teams that had to intervene because automated security policies blocked deployments over non-exploitable vulnerabilities in third-party libraries. That’s a classic example where automation caused friction without adding value. So the balance is about maturity: automate where findings are high-confidence and easily fixable, but maintain oversight in phases where risk context matters, like release gates, production changes, or threat hunting. ... Tools are often dropped into pipelines without tuning or context, overwhelming developers with irrelevant findings. The result? Fatigue, resistance, and workarounds.
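
That balance can be encoded directly in the pipeline: block automatically only on findings that are severe, high-confidence, and actually exploitable in context, and downgrade the rest to warnings. The field names in this sketch are illustrative:

```python
def should_block(findings: list[dict]) -> bool:
    return any(
        f["severity"] in ("critical", "high")
        and f.get("exploitable", False)     # reachable in this app's context
        and f.get("confidence", 0.0) >= 0.9
        for f in findings
    )

findings = [
    {"id": "CVE-2025-0001", "severity": "high", "exploitable": False},
    {"id": "CVE-2025-0002", "severity": "high", "exploitable": True,
     "confidence": 0.95},
]
print("block deploy" if should_block(findings) else "warn only")  # block deploy
```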

Daily Tech Digest - July 19, 2025


Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks


AI-Driven Threat Hunting: Catching Zero Day Exploits Before They Strike

Cybersecurity has come a long way from the days of simple virus scanners and static firewalls. In an earlier era, signature-based defenses were sufficient to detect known malware. Zero-day exploits, by contrast, are unpredictable threats that traditional security tools fail to detect. Microsoft and Google rushed to fix dozens of zero-day vulnerabilities that attackers exploited in the wild during 2023. The consequences can be extreme: a single breach can mean major financial losses and immediate damage to corporate reputation. AI functions as a protective measure that compensates for the limits of human analysts and legacy systems. It analyzes enormous amounts of data, including network traffic, timestamps, IP logs, and other inputs, to detect security risks. ... So how does AI pull this off? It’s all about finding the weird stuff. Network traffic normally follows regular patterns, but zero-day exploits cause packet-size fluctuations and timing irregularities. AI detects anomalies by comparing data against a learned baseline of typical behavior. Autoencoders, for example, are neural networks trained to reconstruct their input. When an autoencoder fails to faithfully reconstruct new data, the high reconstruction error flags that activity as suspicious.
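
A toy version of that reconstruction-error idea, using scikit-learn on synthetic data (the features, network size, and threshold are all illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 8))   # stand-in for packet size/timing features

# A small bottleneck network trained to reproduce "normal" traffic.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

def anomaly_scores(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)  # reconstruction error

threshold = np.percentile(anomaly_scores(normal), 99)
odd = rng.normal(4, 1, size=(5, 8))        # out-of-pattern traffic
print(anomaly_scores(odd) > threshold)     # mostly True: flagged as suspicious
```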


How AI is changing the GRC strategy

CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. “They’re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don’t let it be something that kills the business if we do it wrong,” says Marcus. ... “The really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we’re all in it together and you’re not here to slow them down.” ... Ultimately, the task is for security leaders to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. “A lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,” Norton says. “These days, it’s no longer about CISOs saying ‘yes’ or ‘no’. It’s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.”


Three Invisible Hurdles to Innovation

Innovation changes internal power dynamics. The creation of a new line of business leads to a legacy line of business declining or, at an extreme, shutting down or being spun out. One part of the organization wins; another loses. Why would a department put forward or support a proposal that would put that department out of business or lead it to lose organizational influence? That means senior leaders might never see a proposal that’s good for the whole organization if it is bad for one part of the organization. ... While the natural language interface of OpenAI’s ChatGPT was easy to use the first time I tried it, I wasn’t sure what to do with a large language model (LLM). First I tried to mimic a Google search, and then jumped in and tried to design a course from scratch. The lack of artfully constructed prompts on first-generation technology led to predictably disappointing results. For DALL-E, I tried to prove that AI couldn’t match the skills of my daughter, a skilled artist. Seeing mediocre results left me feeling smug, reaffirming my humanity. ... Social identity theory suggests that individuals often merge their personal identity with the offerings of the company at which they work. Ask them who they are, and they respond with what they do: “I’m a newspaper guy.” So imagine how Gilbert’s message landed with his employees who worked to produce a print newspaper every day.


Beyond Code Generation: How Asimov is Transforming Engineering Team Collaboration

The conventional wisdom around AI coding assistance has been misguided. Research shows that engineers spend only about 10% of their time writing code, while roughly 70% is devoted to understanding existing systems, debugging issues, and collaborating with teammates on intricate problems. This reality exposes a significant gap in current AI tooling, which predominantly focuses on code generation rather than comprehension. “Engineers don’t spend most of their time writing code. They spend most of their time understanding code and collaborating with other teammates on hard problems,” explains the Reflection team. This insight drives Asimov’s unique approach to engineering productivity. ... As engineering teams grapple with increasingly complex systems and distributed architectures, tools like Asimov offer a glimpse into a future where AI serves as a genuine collaborative partner rather than just a code completion engine. By focusing on understanding and context rather than mere generation, Asimov addresses the actual pain points that slow down engineering teams. The tool is currently in early access, with Reflection AI selecting teams for initial deployment.


Data Management Makes or Breaks AI Success for SLGs

“Many agencies start their AI journeys with a specific use case, something simple like a chatbot,” says John Whippen, regional vice president for U.S. public sector at Snowflake. “As they show the value of those individual use cases, they’ll attempt to make it more prevalent across an entire agency or department.” Especially in populous jurisdictions, readying data for large-scale AI initiatives can be challenging. Nevertheless, that initial data consolidation, governance and management are central to cross-agency AI deployments, according to Whippen and other industry experts. ... Most state agencies operate on a hybrid cloud model. Many of them work with multiple hyperscalers and likely will for the foreseeable future. This creates potential data fragmentation. However, where the data is stored is not necessarily as important as the ability to centralize how it is accessed, managed and manipulated. “Today, you can extract all of that data much more easily, from a user interface perspective, and manipulate it the way you want, then put it back into the system of record, and you don't need a data scientist for that,” says Mike Hurt, vice president of state and local government and education for ServiceNow. “It's not your grandmother's way of tagging anymore.”


The Role Of Empathy In Effective Leadership

To maintain good working relationships with others, you must be willing to understand their experiences and perspectives. As we all know, everyone sees the world through a different lens. Even if you don’t fully align with others’ worldviews, as a leader, you must create an environment where individuals feel heard and respected. ... Operate with perspective and cultivate inclusive practices. In a way, empathy is being able to see through the eyes of others. Many of the unspoken rules of the corporate world are based on the experience of white males in the workforce. Considering the countless other demographics in the modern workforce, most of these nuances or patterns are outdated, exclusionary, counterproductive, and even harmful to some people. Can you identify any unspoken rules you enforce or adhere to within your career? Sometimes, they are hard to spot right away. In my research as a DEI professional, I’ve encountered many unspoken cultural rules that don’t consider the perspective of diverse groups. ... Empathetic leaders create more harmonious workplaces and inspire their teams to perform better. Creating an atmosphere of acceptance and understanding sets the stage for healthier dynamics. In questioning the status quo, you root out any counterproductive trends in company culture that need addressing.


New Research on the Link Between Learning and Innovation

Cognitive neuroscience confirms what experienced leaders intuitively know: Our brains need structured breaks to turn experiences into actionable knowledge. Just as sleep helps consolidate daily experiences into long-term memory, structured reflection allows teams to integrate insights gained during exploration phases into strategies and plans. Without these deliberate rhythms, teams risk becoming overwhelmed by continual information intake—akin to endlessly inhaling without pausing to exhale—leading to confusion and burnout. By intentionally embedding reflective pauses within structured learning cycles, teams can harness their full innovative potential. ... You can think of a team’s learning activities as elements of a musical masterpiece. Just as great compositions—like Beethoven’s Fifth Symphony—skillfully balance moments of tension with moments of powerful resolution, effective team learning thrives on the structured interplay between building up and then releasing tension. Harmonious learning occurs when complementary activities, such as team reflection and external expert consultations, reinforce one another, creating moments of clarity and alignment. Conversely, dissonance arises when conflicting activities, like simultaneous experimentation and detailed planning, collide and cause confusion.


Optimizing Search Systems: Balancing Speed, Relevance, and Scalability

Efficiently managing geospatial search queries on Uber Eats is crucial, as users often seek out nearby restaurants or grocery stores. To achieve this, Uber Eats uses geo-sharding, a technique that ensures all relevant data for a specific location is stored within a single shard. This minimizes query overhead and eliminates inefficiencies caused by fetching and aggregating results from multiple shards. Additionally, geo-sharding allows first-pass ranking to happen directly on data nodes, improving speed and accuracy. Uber Eats primarily employs two geo-sharding techniques: latitude sharding and hex sharding. Latitude sharding divides the world into horizontal bands, with each band representing a distinct shard. Shard ranges are computed offline using Spark jobs, which first divide the map into thousands of narrow latitude stripes and then group adjacent stripes to create shards of roughly equal size. Documents falling on shard boundaries are indexed in both neighboring shards to prevent missing results. One key advantage of latitude sharding is its ability to distribute traffic efficiently across time zones. Given that Uber Eats experiences peak activity following a “sun pattern”, with high demand during the day and lower demand at night, this method helps prevent excessive load on specific shards.
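
A rough sketch of the latitude-sharding idea in Python follows. The stripe width, shard count, and boundary-overlap rule are illustrative assumptions, not Uber's actual parameters (which are computed offline by Spark jobs over real document counts).

    # Illustrative latitude sharding: group narrow latitude stripes into
    # roughly equal-sized shards, duplicating near-boundary documents into
    # both neighboring shards so range queries never miss results.
    import bisect
    from collections import Counter

    STRIPE_DEG = 0.5    # stripe width (assumption)
    NUM_SHARDS = 4      # shard count (assumption)
    OVERLAP_DEG = 0.05  # docs this close to a boundary go to both shards

    def stripe_of(lat):
        return int((lat + 90) // STRIPE_DEG)

    def build_shard_bounds(doc_lats):
        # Greedily group adjacent stripes so each shard holds ~equal docs.
        counts = Counter(stripe_of(lat) for lat in doc_lats)
        target = len(doc_lats) / NUM_SHARDS
        bounds, acc = [], 0
        for stripe in sorted(counts):
            acc += counts[stripe]
            if acc >= target and len(bounds) < NUM_SHARDS - 1:
                bounds.append((stripe + 1) * STRIPE_DEG - 90)  # boundary latitude
                acc = 0
        return bounds  # N-1 boundary latitudes define N shards

    def shards_for(lat, bounds):
        out = {bisect.bisect(bounds, lat)}
        for i, b in enumerate(bounds):  # duplicate near-boundary docs
            if abs(lat - b) <= OVERLAP_DEG:
                out.update({i, i + 1})
        return out

    lats = [37.77, 37.80, 40.71, 40.73, 51.51, 48.85, -33.87, 35.68]
    bounds = build_shard_bounds(lats)
    for lat in lats:
        print(lat, "->", shards_for(lat, bounds))

Because each shard is a contiguous latitude band, a query for one location touches a single shard, and the boundary duplication trades a little index size for complete results.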


How to beat the odds in tech transformation

Creating an enterprise-wide technology solution requires defining a scope that’s ambitious and quickly actionable and has an underlying objective to keep your customers and organization on board throughout the project. ... Technology may seem increasingly autonomous, but tech transformations are not. They depend on the full engagement and alignment of people across your organization, starting with leadership. First, senior leaders need to be educated so they clearly understand not just the features of the new technology but, more importantly, the business benefits. This will motivate them to champion engagement and adoption throughout the organization. ... Even the best-planned journeys to new frontiers will run into unexpected challenges. For instance, while we had extensively planned for customer migration during our tech transformation, the effort required to make it go as quickly and smoothly as possible was greater than expected. After all, we provide mission-critical solutions, so customers didn’t simply want to know we had validated a new product. They wanted reassurance we had validated their specific use cases. In response, we doubled down on resources to give them enhanced confidence. As mentioned, we introduced a protocol of parallel systems, running the old and new simultaneously.
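
The parallel-run protocol mentioned above can be sketched in a few lines of Python. This is a generic illustration of the pattern (shadow the new system on every request and log any divergence from the old one), with all names hypothetical rather than the company's actual implementation.

    # Generic parallel-run pattern: serve from the legacy system while the
    # new system shadows every request; log any divergence for review.
    import logging

    logger = logging.getLogger("parallel_run")

    def handle_request(request, legacy_system, new_system):
        legacy_result = legacy_system(request)  # source of truth during migration
        try:
            new_result = new_system(request)    # shadow call, result discarded
            if new_result != legacy_result:
                logger.warning("divergence on %r: legacy=%r new=%r",
                               request, legacy_result, new_result)
        except Exception:
            logger.exception("new system failed on %r", request)
        return legacy_result                    # customers only ever see legacy

    if __name__ == "__main__":
        logging.basicConfig(level=logging.WARNING)
        old = lambda r: r * 2
        new = lambda r: r * 2 if r != 3 else 7  # seeded divergence for the demo
        for r in range(5):
            handle_request(r, old, new)

Run this way, customers see only the proven system while every real workload exercises the new one, which is precisely the kind of use-case-level validation the excerpt says customers demanded.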


Leadership vs. Management in Project Management: Walking the Tightrope Between Vision and Execution

At its core, management is about control. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos. Leadership, on the other hand, is about inspiration. It’s the art of painting a compelling vision, rallying teams around a shared purpose, and navigating uncertainty with grit. ... A project manager’s IQ might land them the job, but their EQ determines their success. Leadership in project management isn’t just about charisma; it’s about sensing unspoken tensions, motivating burnt-out teams, and navigating stakeholder egos. ... The debate between leadership and management is a false dichotomy. Like yin and yang, they’re interdependent forces. A project manager who only manages becomes a bureaucrat, obsessed with checkboxes but blind to the bigger picture. One who only leads becomes a dreamer, chasing visions without a roadmap. The future belongs to hybrids: those who can rally a team with a compelling vision and deliver a flawless product on deadline.