Daily Tech Digest - May 05, 2025


Quote for the day:

"Listening to the inner voice and trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis


How CISOs can talk cybersecurity so it makes sense to executives

“With complex technical topics and evolving threats to cover, the typical brief time slot often proves inadequate for meaningful dialogue. Security leaders can address this by preparing concise, business-focused briefing materials in advance and prioritizing the most critical issues for discussion. When time constraints persist, they should advocate for dedicated sessions to ensure proper oversight of cybersecurity matters,” said Ross Young ... When communicating with the board of directors, Turgal advises mapping cybersecurity initiatives to shareholder value. “If the business goal is to protect shareholder value, there is a direct connection to business continuity and increased operational uptime.” To support that, security leaders might increase cyber resilience through containerized immutable backups, disaster recovery and incident response plans—tools that can mitigate brand-damaging attacks and prevent stock price volatility. ... Some of the most productive conversations don’t happen in meetings. They happen over coffee, or on calls with individual board members. If possible, schedule one-on-ones with directors to walk them through key risks. Ask what they want to know more about. Find out how they prefer to receive information. By building rapport outside the meeting, you’ll face fewer surprises inside it. Your strongest allies in the boardroom are often the CFO and legal chief.


The great cognitive migration: How AI is reshaping human purpose, work and meaning

Human purpose and meaning are likely to undergo significant upheaval. For centuries, we have defined ourselves by our ability to think, reason and create. Now, as machines take on more of those functions, the questions of our place and value become unavoidable. If AI-driven job losses occur on a large scale without a commensurate ability for people to find new forms of meaningful work, the psychological and social consequences could be profound. It is possible that some cognitive migrants could slip into despair. AI scientist Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his groundbreaking work on deep learning neural networks that underpin LLMs, has warned in recent years about the potential harm that could come from AI. In an interview with CBS, he was asked if he despairs about the future. He said he did not because, ironically, he found it very hard to take [AI] seriously. He said: “It’s very hard to get your head around the point that we are at this very special point in history where in a relatively short time, everything might totally change. A change on a scale we’ve never seen before. It’s hard to absorb that emotionally.” There will be paths forward. Some researchers and economists, including MIT economist David Autor, have begun to explore how AI could eventually help rebuild middle-class jobs, not by replacing human workers, but by expanding what humans can do. 


CISO vs CFO: why are the conversations difficult?

The disconnect between CISOs and CFOs remains a challenge in many organizations. While cybersecurity threats escalate in scale and complexity, senior leadership often fails to fully grasp the magnitude of the risk. This gap is visible in EY’s 2025 Cybersecurity study, which shows that 68% of CISOs worry that senior leaders underestimate the risks. Progress in bridging this divide happens when CISOs and CFOs are willing to meet halfway, aligning technical priorities with financial realities. Argyle realized that to move the conversation forward, he had to change his approach: he stopped defending the technology and started showing the impact. ... Redesigning the relationship between a CISO and a CFO isn’t something that’s fixed over a single meeting or a strong cup of coffee. It takes time, mutual understanding, and open conversations. As Argyle points out, these discussions shouldn’t be limited to budget season, when both sides are already in negotiation mode. To truly build trust and alignment, CISOs and CFOs need to keep the dialogue alive year-round and make efforts to understand each other’s work, long before money is involved. “Ideally, I’d bring the CFO into tabletop cyber crisis simulations and scenario planning,” he adds. “Let them see the domino effect of a breach — not just read about it in a report. That firsthand exposure builds understanding faster than any PowerPoint.”


How to Build a Team That Thinks and Executes Like a Founder

If your team has a deep understanding of what you are trying to accomplish, you can ensure that everyone is rowing in the same direction. It isn't enough to simply share your vision and goals. To really get the team engaged, it's critical that they understand the underlying "why" behind your goals and decisions. One of the best ways to do this is by being as transparent as possible, such as sharing financial data and other key business metrics. This information can help the team understand the bigger picture and connect how their individual roles contribute to the overall success of the company. ... First, stop assigning tasks to your team. Instead, give team members ownership over entire end-to-end processes. This allows them to take full responsibility for the success of that process and helps you hold the team accountable for executing it successfully. The best way to do this is by focusing on outcome-based delegation. This provides flexibility and autonomy for the team to figure out the best way to achieve the goal. As a business owner, you don't want the team coming to you for every little decision. ... In many cases, a bad deliverable is a result of miscommunication, unclear direction or not having access to the right resources. The challenge is that many business owners give up when delegation doesn't work the way they hoped the first time.


Quiet hiring: How HR can turn this trend into a winning strategy

At its heart, quiet hiring is about strategic talent management. It’s a way for organisations to fill skill gaps and meet changing business needs without expanding their workforce in the traditional sense. Instead of hiring full-time employees, businesses tap into existing employees, freelancers, or contractors to temporarily shift roles or tackle specific projects. It’s about working smarter with the talent you already have, and supplementing that with external experts when needed. ... Instead of looking outside the organisation to fill a gap, businesses can move current employees into new roles or give them additional responsibilities. For instance, if a marketing expert has experience with analytics, they might temporarily shift to the data analytics team to support a busy period. Not only does this save the company time and money in recruitment, but it also develops your current team, gives employees fresh opportunities, and fosters an agile workforce. It’s a win-win—employees gain new skills, and organisations can fill critical gaps without the lengthy hiring process. ... The business world is unpredictable, and the ability to adapt quickly is more important than ever. Quiet hiring offers companies the flexibility they need to respond to sudden changes. For example, if demand for a product surges unexpectedly, internal employees can be quickly moved to meet the increased workload, while contractors can be brought in to handle the temporary increase in tasks.


Attack of the AI crawlers

To be fair, it’s not entirely clear that robots.txt directives are legally enforceable, according to Susskind and other attorneys who focus on technology issues. Therefore, if the model makers were arguing that they have the right to ignore those requests, that might be a legitimate argument. But that is not what they are arguing. They say they abide by those rules, but then many send out undeclared crawlers to crawl anyway. The real problem is that they are inflicting financial damage on the site owners by forcing them to pay far more for bandwidth. And it is solely the model makers that benefit, not the site owners. What is IT to do, Susskind asked, when an undeclared genAI crawler “hits my site a million times a day”? Indeed, Susskind’s team has seen “a single bot hitting a site millions of times per hour. That is several orders of magnitude more burdensome than normal SEO crawling.” ... The problem, according to attorneys in this space, is not with establishing monetary damages but with attribution: how to determine who’s responsible for the surging traffic. In such a hypothetical court case, the lawyers for the deep-pocketed genAI model makers would likely argue that plaintiffs’ sites are visited by millions of users and bots from multiple sources. Without proof tying traffic to a specific crawler or tying a crawler to a specific model maker, the model maker can’t be held accountable for plaintiffs’ financial damages.
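For reference, a declared crawler opt-out looks like the robots.txt below. The user-agent tokens are ones the major vendors have published for their declared crawlers, though names change, so verify against current vendor documentation; and as the attorneys note above, an undeclared crawler simply never honors this file, which is why rate limiting at the CDN or WAF remains the practical backstop.

```
# Opt out of declared genAI training crawlers (token names: check vendor docs)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Regular crawlers: allowed, optionally throttled (Crawl-delay is non-standard)
User-agent: *
Crawl-delay: 10
```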


A Farewell to APMs — The Future of Observability is MCP tools

Initially introduced by Anthropic, the Model Context Protocol (MCP) represents a communication tier between AI agents and other applications, allowing agents to access additional data sources and perform actions as they see fit. More importantly, MCPs open up new horizons for the agent to intelligently choose to act beyond its immediate scope and thereby broaden the range of use cases it can address. The technology is not new, but the ecosystem is. In my mind, it is the equivalent of evolving from custom mobile application development to having an app store. ... With the advent of MCPs, software developers now have the choice of adopting a different model for developing software. Instead of focusing on a specific use case, trying to nail the right UI elements for hard-coded usage patterns, applications can transform into a resource for AI-driven processes. This describes a shift from supporting a handful of predefined interactions to supporting numerous emergent use cases. ... Making observability useful to the agent, however, is a little more involved than slapping an MCP adapter onto an APM. Indeed, many of the current generation of tools, in rushing to support the new technology, took that very route, not taking into consideration that AI agents have limitations of their own.
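Stripped to its essentials, an MCP-style server is a process that answers two questions over JSON-RPC: "what tools do you have?" and "run this tool with these arguments." The sketch below hand-rolls that shape in plain Python rather than using the official MCP SDK, and the `query_latency` tool, its schema, and the canned metric are invented to illustrate how an APM could surface data to an agent.

```python
import json
import sys

# Hypothetical observability tool exposed to an agent; a real server would
# back this with the APM or metrics store.
TOOLS = [{
    "name": "query_latency",
    "description": "Return p95 latency (ms) for a service over the last hour.",
    "inputSchema": {
        "type": "object",
        "properties": {"service": {"type": "string"}},
        "required": ["service"],
    },
}]

def query_latency(service: str) -> dict:
    return {"service": service, "p95_ms": 212.5}   # canned value for the sketch

def handle(request: dict) -> dict:
    if request["method"] == "tools/list":      # the agent discovers capabilities
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":    # the agent invokes a tool it chose
        result = query_latency(**request["params"]["arguments"])
    else:
        result = {"error": f"unknown method {request['method']}"}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

if __name__ == "__main__":
    for line in sys.stdin:                     # one JSON-RPC request per line
        print(json.dumps(handle(json.loads(line))), flush=True)
```

The significant branch is the first one: because the tool list is discoverable, the agent, not a hard-coded dashboard, decides when latency data is relevant to the task at hand.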


Knowing when to use AI coding assistants

AI performs exceptionally well with common coding patterns. Its sweet spot is generating new code with low complexity when your objectives are well-specified and you’re using popular libraries, says Swiber. “Web development, mobile development, and relatively boring back-end development are usually fairly straightforward,” adds Charity Majors, co-founder and CTO of Honeycomb. The more common the code and the more online examples, the better AI models perform. ... While AI accelerates development, it creates a new burden to review and validate the resulting code. “In a worst-case scenario, the time and effort required to debug and fix subtle issues in AI-generated code could even eclipse the time it would require to write the code from scratch,” says Sonar’s Wang. Quality and security can suffer from vague prompts or poor contextual understanding, especially in large, complex code bases. Transformer-based models also face limitations with token windows, making it harder to grasp projects with many parts or domain-specific constraints. “We’ve seen cases where AI outputs are syntactically correct but contain logical errors or subtle bugs,” Wang notes. These mistakes originate from a “black box” process, he says, making AI risky for mission-critical enterprise applications that require strict governance.


CISOs Take Note: Is Needless Cybersecurity Strangling Your Business?

"For IT and security teams, redundant and obsolete security tools or measures increase workflows, hurt efficiency, and extend incident response and patch time," he explains via email. "When there's excessive or ineffective tools in the security stack, teams waste valuable time sifting through redundant and low-value alerts, hampering them from focusing on real threats." ... Additionally, excessive security controls, such as overly intrusive multi-factor authentication, can create employee friction, slowing down and challenging collaboration with partners, vendors, and customers, Shilts says. "This often results in employees finding workarounds, such as using their personal emails, which introduces security risks that are difficult to track and manage." ... In general, an organizational security posture, including tools and procedures, should be assessed annually or even earlier if a major change is implemented, Biswas says. Ideally, to prevent conflicts of interest, such assessments should be performed by independent, expert third parties. "After all, it’s difficult for an implementor or operator to be a truly impartial assessor of their own work," he explains. "While some organizations may be able to do so via internal audit, for most it makes sense to hire an outsider to play devil’s advocate."


Machines Cannot Feel or Think, but Humans Can, and Ought To

In a philosophical debate, the question, as it is applied to AI, is: How do we know that AI does not have an experience of the world? The same question could be asked of flowers, animals, stones, and automobiles. In this sense, the question of “other intelligences” is often quite valuable and holds tremendous potential for escaping the capital-focused development of information processing machines. In its most useful form, this approach to “post-humanism” refers to the evolved understanding that humans are not the center of the universe, but exist within a dense network of relationships. This definition of the post-human may pave the way to decentering definitions of “human” that privilege human needs over those of the environment, or even people whom we consider less-than. It may cultivate a deeper appreciation for the complexity of animals and their ecosystems, and, through careful design, might lead to an approach to technological development that considers the interdependencies within systems as connected, not isolated. Have we even started to build a capacity to understand those worlds, to empathize with trees and rivers and elk, to the extent to which we can now fully shift our attention to the potential emotional experiences of a hypothetical Microsoft product? 

Daily Tech Digest - May 03, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis



Why agentic AI is the next wave of innovation

AI agents have become integral to modern enterprises, not just enhancing productivity and efficiency, but unlocking new levels of value through intelligent decision-making and personalized experiences. The latest trends indicate a significant shift towards proactive AI agents that anticipate user needs and act autonomously. These agents are increasingly equipped with hyper-personalization capabilities, tailoring interactions based on individual preferences and behaviors. ... According to NVIDIA, when Azure AI Agent Service is paired with NVIDIA AgentIQ, an open-source toolkit, developers can now profile and optimize teams of AI agents in real time to reduce latency, improve accuracy, and drive down compute costs. ... “The launch of NVIDIA NIM microservices in Azure AI Foundry offers a secure and efficient way for Epic to deploy open-source generative AI models that improve patient care, boost clinician and operational efficiency, and uncover new insights to drive medical innovation,” says Drew McCombs, vice president, cloud and analytics at Epic. “In collaboration with UW Health and UC San Diego Health, we’re also researching methods to evaluate clinical summaries with these advanced models. Together, we’re using the latest AI technology in ways that truly improve the lives of clinicians and patients.”


Businesses intensify efforts to secure data in cloud computing

Building a robust security strategy begins with understanding the delineation between the customer's and the provider's responsibilities. Customers are typically charged with securing network controls, identity and access management, data, and applications within the cloud, while the CSP maintains the core infrastructure. The specifics of these responsibilities depend on the service model and provider in question. The importance of effective cloud security has grown as more organisations shift away from traditional on-premises infrastructure. This shift brings new regulatory expectations relating to data governance and compliance. Hybrid and multicloud environments offer businesses unprecedented flexibility, but also introduce complexity, increasing the challenge of preventing unauthorised access. ... Attackers are adjusting their tactics accordingly, viewing cloud environments as potentially vulnerable targets. According to the statement, "A well-thought-out cloud security plan can significantly reduce the likelihood of breaches or damage, enhance compliance, and increase customer trust—even though it can never completely prevent attacks and vulnerabilities."


Safeguarding the Foundations of Enterprise GenAI

Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers. A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces. To enhance identity security across all types of identities, several key controls should be implemented. Enforcing strong adaptive multi-factor authentication (MFA) for all user access is essential to prevent unauthorised entry. Securing access to credentials, keys, certificates, and secrets—whether used by humans, backend applications, or scripts—requires auditing their use, rotating them regularly, and ensuring that API keys or tokens that cannot be automatically rotated are not permanently assigned.
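As a small illustration of one control from that list, the sketch below flags machine credentials that have exceeded a rotation window and singles out the ones that cannot be auto-rotated. The inventory records and the 90-day window are assumptions; in practice this data would come from a secrets manager or identity governance platform.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)   # assumed rotation policy

# Hypothetical credential inventory; real entries would come from a secrets manager.
inventory = [
    {"id": "svc-deploy-key",   "rotatable": True,
     "last_rotated": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"id": "legacy-etl-token", "rotatable": False,
     "last_rotated": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for cred in inventory:
    age = now - cred["last_rotated"]
    if cred["rotatable"] and age > MAX_AGE:
        print(f"ROTATE {cred['id']}: {age.days} days old")
    elif not cred["rotatable"]:
        # The passage's point: keys that cannot be rotated must not be
        # permanently assigned; they need expiry, scoping, and audited use.
        print(f"REVIEW {cred['id']}: not auto-rotatable, enforce expiry and audit")
```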


The new frontier of API governance: Ensuring alignment, security, and efficiency through decentralization

To effectively govern APIs in a decentralized landscape, organizations must embrace new principles that foster collaboration, flexibility and shared responsibility. Optimized API governance is not about abandoning control, but rather about distributing it strategically while still maintaining overarching standards and ensuring critical aspects such as security, compliance and quality. This includes granting development teams the autonomy to design, develop and manage their APIs within clearly defined boundaries and guidelines, which encourages innovation while fostering ownership and allows each team to optimize their APIs for their specific needs. This can be reinforced by a shared responsibility model in which teams are accountable for adhering to governance policies while a central governing body provides the overarching framework, guidelines and support. The operating model is further supported by cultivating a culture of collaboration and communication between central governance and development teams: the central governance team can include a representative from each development team and maintain clear channels for feedback, shared documentation and joint problem-solving. Implementing governance policies as code, leveraging tools and automation, makes it easier to enforce standards consistently and efficiently across the decentralized environment.
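As one concrete reading of "governance policies as code," the sketch below is a CI gate that checks every API contract against a central baseline before it ships. The two rules and the OpenAPI file layout are illustrative assumptions, not a specific product's policy set.

```python
import sys
import yaml  # pip install pyyaml

# Example baseline rules a central governance team might enforce in CI.
def violations(spec: dict) -> list[str]:
    found = []
    if not spec.get("components", {}).get("securitySchemes"):
        found.append("no securitySchemes defined (authn/authz baseline)")
    for path, ops in spec.get("paths", {}).items():
        for verb, op in ops.items():
            if "description" not in op:
                found.append(f"{verb.upper()} {path}: missing description")
    return found

if __name__ == "__main__":
    spec = yaml.safe_load(open(sys.argv[1]))   # e.g. api/openapi.yaml
    problems = violations(spec)
    for p in problems:
        print("GOVERNANCE:", p)
    sys.exit(1 if problems else 0)             # fail the pipeline on violations
```

Teams keep their autonomy over design decisions; the pipeline enforces only the shared floor.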


Banking on innovation: Engineering excellence in regulated financial services

While financial services regulations aren’t likely to get simpler, banks are finding ways to innovate without compromising security. "We’re seeing a culture change with our security office and regulators," explains Lanham. "As cloud tech, AI, and LLMs arrive, our engineers and security colleagues have to upskill." Gartner's 2025 predictions say GenAI is shifting data security to protect unstructured data. Rather than cybersecurity taking a gatekeeper role, security by design is built into development processes. "Instead of saying “no”, the culture is, how can we be more confident in saying “yes”?" notes Lanham. "We're seeing a big change in our security posture, while keeping our customers' safety at the forefront." As financial organizations carefully tread a path through digital and AI transformation, the most successful will balance innovation with compliance, speed with security, and standardization with flexibility. Engineering excellence in financial services needs leaders who can set a clear vision while balancing tech potential with regulations. The path won’t be simple, but by investing in simplification, standardization and a shared knowledge and security culture, financial services engineering teams can drive positive change for millions of banking customers.


‘Data security has become a trust issue, not just a tech issue’

Data is very messy and data ecosystems are very complex. Every organisation we speak to has data across multiple different types of databases and data stores for different use cases. As an industry, we need to acknowledge the fact that no organisation has an entirely homogeneous data stack, so we need to support and plug into a wide variety of data ecosystems, like Databricks, Google and Amazon, regardless of the tooling used for data analytics, for integration, for quality, for observability, for lineage and the like. ... Cloud adoption is causing organisations to rethink their traditional approach to data. Most use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality and effective governance. In reality, most organisations will need to adopt a hybrid approach to address their entire data landscape, which typically spans a wide variety of sources that span both cloud and on premises. ... Data security has become a trust issue, not just a tech issue. With AI, hybrid cloud and complex supply chains, the attack surface is massive. We need to design with security in mind from day one – think secure coding, data-level controls and zero-trust principles. For AI, governance is critical, and it too needs to be designed in and not an afterthought. That means tracking where data comes from, how models are trained, and ensuring transparency and fairness.


Secure by Design vs. DevSecOps: Same Security Goal, Different Paths

Although the "secure by design" initiative offers limited guidance on how to make an application secure by default, it comes closer to being a distinct set of practices than DevSecOps. The latter is more of a high-level philosophy that organizations must interpret on their own; in contrast, secure by design advocates specific practices, such as selecting software architectures that mitigate the risk of data leakage and avoiding memory management practices that increase the chances of the execution of malicious code by attackers. ... Whereas DevSecOps focuses on all stages of the software development life cycle, the secure by design concept is geared mainly toward software design. It deals less with securing applications during and after deployment. Perhaps this makes sense because so long as you start with a secure design, you need to worry less about risks once your application is fully developed — although given that there's no way to guarantee an app can't be hacked, DevSecOps' holistic approach to security is arguably the more responsible one. ... Even if you conclude that secure by design and DevSecOps mean basically the same thing, one notable difference is that the government sector has largely driven the secure by design initiative, while DevSecOps is more popular within private industry.


Immutable by Design: Reinventing Business Continuity and Disaster Recovery

Immutable backups create tamper-proof copies of data, protecting it from cyber threats, accidental deletion, and corruption. This guarantees that critical data can be quickly restored, allowing businesses to recover swiftly from disruptions. Immutable storage provides data copies that cannot be manipulated or altered, ensuring data remains secure and can quickly be recovered from an attack. In addition to immutable backup storage, response plans must be continually tested and updated to combat the evolving threat landscape and adapt to growing business needs. The ultimate test of a response plan ensures data can be quickly and easily restored or failed over, depending on the event: activating a second site in the case of a natural disaster, or recovering systems without making any ransomware payments in the case of an attack. This testing involves validating the reliability of backup systems, recovery procedures, and the overall disaster recovery plan to minimize downtime and ensure business continuity. ... It can be challenging for IT teams trying to determine the perfect fit for their ecosystem, as many storage vendors claim to provide immutable storage but are missing key features. As a rule of thumb, if "immutable" data can be overwritten by a backup or storage admin, a vendor, or an attacker, then it is not a truly immutable storage solution.
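The rule of thumb has a direct technical expression. As one common example (an assumption here, not a vendor the article names), S3 Object Lock in COMPLIANCE mode prevents anyone, including the bucket's own admins, from deleting or overwriting a backup version before its retention date; GOVERNANCE mode, by contrast, can be lifted by privileged users and so fails the article's test. A minimal sketch, assuming a bucket created with Object Lock and versioning enabled:

```python
from datetime import datetime, timedelta, timezone

import boto3  # pip install boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="backup-vault",                 # hypothetical bucket name
    Key="daily/2025-05-05/db.dump",
    Body=open("db.dump", "rb"),
    ObjectLockMode="COMPLIANCE",           # admins cannot shorten or remove the lock
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```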


Neurohacks to outsmart stress and make better cybersecurity decisions

In cybersecurity, where clarity and composure are essential, particularly during a data breach or threat response, these changes can have high-stakes consequences. “The longer your brain is stuck in this high-stress state, the more of those changes you will start to see and burnout is just an extreme case of chronic stress on the brain,” Landowski says. According to her, the tipping point between healthy stress and damaging chronic stress usually comes after about eight to 12 weeks, but it varies between individuals. “If you know about some of the things you can do to reduce the impact of stress on your body, you can potentially last a lot longer before you see any effects, whereas if you’re less resilient, or if your genes are more susceptible to stress, then it could be less.” ... Working in cybersecurity, particularly as a hacker, is often about understanding how people think and then spotting the gaps. That same shift in understanding — tuning into how the brain works under different conditions — can help cybersecurity leaders make better decisions and build more resilient teams. As Cerf highlights, he works with organizations to identify these optimal operating states, testing how individuals and entire teams respond to stress and when their brains are most effective. “The brain is not just a solid thing,” Cerf says.


Beyond Safe Models: Why AI Governance Must Tackle Unsafe Ecosystems

Despite the evident risks of unsafe deployment ecosystems, the prevailing approach to AI governance still heavily emphasizes pre-deployment interventions—such as alignment research, interpretability tools, and red teaming—aimed at ensuring that the model itself is technically sound. Governance initiatives like the EU AI Act, while vital, primarily place obligations on providers and developers to ensure compliance through documentation, transparency, and risk management plans. However, the governance of what happens after deployment when these models enter institutions with their own incentives, infrastructures, and oversight receives comparatively less attention. For example, while the EU AI Act introduces post-market monitoring and deployer obligations for high-risk AI systems, these provisions remain limited in scope. Monitoring primarily focuses on technical compliance and performance, with little attention to broader institutional, social, or systemic impacts. Deployer responsibilities are only weakly integrated into ongoing risk governance and focus primarily on procedural requirements—such as record-keeping and ensuring human oversight—rather than assessing whether the deploying institution has the capacity, incentives, or safeguards to use the system responsibly. 

Daily Tech Digest - May 01, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden



Bridging the IT and security team divide for effective incident response

One reason IT and security teams end up siloed is the healthy competitiveness that often exists between them. IT wants to innovate, while security wants to lock things down. These teams are made up of brilliant minds. However, faced with the pressure of a crisis, they might hesitate to admit they feel out of control, simmering issues may come to a head, or they may become so fixated on solving the issue that they fail to update others. To build an effective incident response strategy, identifying a shared vision is essential. Here, leadership should host joint workshops where teams learn more about each other and share ideas about embedding security into system architecture. These sessions should also simulate real-world crises, so that each team is familiar with how their roles intersect during a high-pressure situation and feels comfortable when an actual crisis arises. ... By simulating realistic scenarios – whether it’s ransomware incidents or malware attacks – those in leadership positions can directly test and measure the incident response plan so that it becomes an ingrained process. Throw in curveballs when needed, and use these exercises to identify gaps in processes, tools, or communication. There’s a world of issues to uncover: disconnected tools and systems; a lack of automation that could speed up response times; and excessive documentation requirements.


First Principles in Foundation Model Development

The mapping of words and concepts into high-dimensional vectors captures semantic relationships in a continuous space. Words with similar meanings or that frequently appear in similar contexts are positioned closer to each other in this vector space. This allows the model to understand analogies and subtle nuances in language. The emergence of semantic meaning from co-occurrence patterns highlights the statistical nature of this learning process. Hierarchical knowledge structures, such as the understanding that “dog” is a type of “animal,” which is a type of “living being,” develop organically as the model identifies recurring statistical relationships across vast amounts of text. ... The self-attention mechanism represents a significant architectural innovation. Unlike recurrent neural networks that process sequences sequentially, self-attention allows the model to consider all parts of the input sequence simultaneously when processing each word. The “dynamic weighting of contextual relevance” means that for any given word in the input, the model can attend more strongly to other words that are particularly relevant to its meaning in that specific context. This ability to capture long-range dependencies is critical for understanding complex language structures. The parallel processing capability significantly speeds up training and inference. 
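The mechanism described in the second half of this excerpt is compact enough to write out. A minimal single-head, scaled dot-product self-attention in NumPy, with arbitrary dimensions and random weights purely for illustration:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # relevance of every token to every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: dynamic contextual weighting
    return weights @ V                         # each output mixes all positions at once

rng = np.random.default_rng(0)
seq_len, d = 5, 8                              # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d))              # token embedding vectors
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                               # (5, 8): one context-aware vector per token
```

Because `scores` is computed for all token pairs in one matrix product, no sequential recurrence is involved, which is the parallelism the passage credits for faster training and inference.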


The best preparation for a password-less future is to start living there now

One of the big ideas behind passkeys is to keep us users from behaving as our own worst enemies. For nearly two decades, malicious actors -- mainly phishers and smishers -- have been tricking us into giving them our passwords. You'd think we would have learned how to detect and avoid these scams by now. But we haven't, and the damage is ongoing. ... But let's be clear: Passkeys are not passwords. If we're getting rid of passwords, shouldn't we also get rid of the phrase "password manager?" Note that there are two primary types of credential managers. The first is the built-in credential manager. These are the ones from Apple, Google, Microsoft, and some browser makers built into our platforms and browsers, including Windows, Edge, MacOS, Android, and Chrome. With passkeys, if you don't bring your own credential manager, you'll likely end up using one of these. ... The FIDO Alliance defines a "roaming authenticator" as a separate device to which your passkeys can be securely saved and recalled. Examples are hardware security keys (e.g., Yubico) and recent Android phones and tablets, which can act in the capacity of a hardware security key. Since your credentials to your credential manager are literally the keys to your entire kingdom, they deserve some extra special security.


Mind the Gap: Assessing Data Quality Readiness

Data Quality Readiness is defined as the ratio of the number of fully described Data Quality Measure Elements that are being calculated and/or collected to the number of Data Quality Measure Elements in the desired set of Data Quality Measures. By fully described I mean both the “number of data values” part and the “that are outliers” part. The first prerequisite activity is determining which Quality Measures you want to implement. The ISO standard defines 15 different Data Quality Characteristics. I covered those last time. The Data Quality Characteristics are made up of 63 Quality Measures. The Quality Measures are categorized as Highly Recommendable (19), Recommendable (36), and For Reference (8). This provides a starting point for prioritization. Begin with a few measures that are most applicable to your organization and that will have the greatest potential to improve the quality of your data. The reusability of the Quality Measures can factor into the decision, but it shouldn’t be the primary driver. The objective is not merely to collect information for its own sake, but to use that information to generate value for the enterprise. The result will be a set of Data Quality Measure Elements to collect and calculate. You do the ones that are best for you, but I would recommend looking at two in particular.
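The ratio itself is simple arithmetic; the work is in the counting. A sketch with placeholder counts:

```python
# Data Quality Readiness, as defined above: fully described Measure Elements
# already collected/calculated, divided by the elements your chosen Quality
# Measures require. The counts are placeholders for illustration.

required_elements = 24   # elements needed by the Quality Measures you selected
fully_described   = 15   # both parts known: "number of data values" and "that are outliers"

readiness = fully_described / required_elements
print(f"Data Quality Readiness: {readiness:.0%}")   # e.g. "Data Quality Readiness: 62%"
```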


Why non-human identity security is the next big challenge in cybersecurity

What makes this particularly challenging is that each of these identities requires access to sensitive resources and carries potential security risks. Unlike human users, who follow predictable patterns and can be managed through traditional IAM solutions, non-human identities operate 24/7, often with elevated privileges, making them attractive targets for attackers. ... We’re witnessing a paradigm shift in how we need to think about identity security. Traditional security models were built around human users – focusing on aspects like authentication, authorisation and access management from a human-centric perspective. But this approach is inadequate for the machine-dominated future we’re entering. Organisations need to adopt a comprehensive governance framework specifically designed for non-human identities. This means implementing automated discovery and classification of all machine identities and their secrets, establishing centralised visibility and control and enforcing consistent security policies across all platforms and environments. ... First, organisations need to gain visibility into their non-human identity landscape. This means conducting a thorough inventory of all machine identities and their secrets, their access patterns and their risk profiles.


Preparing for the next wave of machine identity growth

First, let’s talk about the problem of ownership. Even organizations that have conducted a thorough inventory of the machine identities in their environments often lack a clear understanding of who is responsible for managing those identities. In fact, 75% of the organizations we surveyed indicated that they don’t have assigned ownership for individual machine identities. That’s a real problem—especially since poor (or insufficient) governance practices significantly increase the likelihood of compromised access, data loss, and other negative outcomes. Another critical blind spot is around understanding what data each machine identity can or should be able to access—and just as importantly, what it cannot and should not access. Without clarity, it becomes nearly impossible to enforce proper security controls, limit unnecessary exposure, or maintain compliance. Each machine identity is a potential access point to sensitive data and critical systems. Failing to define and control their access scope opens the door to serious risk. Addressing the issue starts with putting a comprehensive machine identity security solution in place—ideally one that lets organizations govern machine identities just as they do human identities. Automation plays a critical role: with so many identities to secure, a solution that can discover, classify, assign ownership, certify, and manage the full lifecycle of machine identities significantly streamlines the process.


To Compete, Banking Tech Needs to Be Extensible. A Flexible Platform is Key

The banking ecosystem includes three broad stages along the trajectory toward extensibility, according to Ryan Siebecker, a forward deployed engineer at Narmi, a banking software firm. These include closed, non-extensible systems — typically legacy cores with proprietary software that doesn’t easily connect to third-party apps; systems that allow limited, custom integrations; and open, extensible systems that allow API-based connectivity to third-party apps. ... The route to extensibility can be enabled through an internally built, custom middleware system, or institutions can work with outside vendors whose systems operate in parallel with core systems, including Narmi. Michigan State University Federal Credit Union, which began its journey toward extensibility in 2009, pursued an independent route by building in-house middleware infrastructure to allow API connectivity to third-party apps. Building in-house made sense given the early rollout of extensible capabilities, but when developing a toolset internally, institutions need to consider appropriate staffing levels — a commitment not all community banks and credit unions can make. For MSUFCU, the benefit was greater customization, according to the credit union’s chief technology officer Benjamin Maxim. "With the timing that we started, we had to do it all ourselves," he says, noting that it took about 40 team members to build a middleware system to support extensibility.


5 Strategies for Securing and Scaling Streaming Data in the AI Era

Streaming data should never be wide open within the enterprise. Least-privilege access controls, enforced through role-based (RBAC) or attribute-based (ABAC) access control models, limit each user or application to only what’s essential. Fine-grained access control lists (ACLs) add another layer of protection, restricting read/write access to only the necessary topics or channels. Combine these controls with multifactor authentication, and even a compromised credential is unlikely to give attackers meaningful reach. ... Virtual private cloud (VPC) peering and private network setups are essential for enterprises that want to keep streaming data secure in transit. These configurations ensure data never touches the public internet, thus eliminating exposure to distributed denial of service (DDoS), man-in-the-middle attacks and external reconnaissance. Beyond security, private networking improves performance. It reduces jitter and latency, which is critical for applications that rely on subsecond delivery or AI model responsiveness. While VPC peering takes thoughtful setup, the benefits in reliability and protection are well worth the investment. ... Just as importantly, security needs to be embedded into culture. Enterprises that regularly train their employees on privacy and data protection tend to identify issues earlier and recover faster.
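To make the least-privilege point concrete, here is a toy ACL check in Python; the principals, topics, and operations are hypothetical, and in a real deployment the broker (for example, Kafka's ACL mechanism) enforces this, not application code.

```python
# Each (principal, topic) pair is granted only the operations it needs.
ACLS = {
    ("svc-fraud-model", "payments.events"):  {"read"},
    ("svc-checkout",    "payments.events"):  {"write"},
    ("svc-dashboard",   "payments.metrics"): {"read"},
}

def authorize(principal: str, topic: str, operation: str) -> bool:
    return operation in ACLS.get((principal, topic), set())

assert authorize("svc-fraud-model", "payments.events", "read")
assert not authorize("svc-fraud-model", "payments.events", "write")  # least privilege
assert not authorize("svc-checkout", "payments.metrics", "read")     # no grant, no access
```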


Supply Chain Cybersecurity – CISO Risk Management Guide

Modern supply chains often span continents and involve hundreds or even thousands of third-party vendors, each with its own security posture and vulnerabilities. Attackers have recognized that breaching a less secure supplier can be the easiest way to compromise a well-defended target. Recent high-profile incidents have shown that supply chain attacks can lead to data breaches, operational disruptions, and significant financial losses. The interconnectedness of digital systems means that a single compromised vendor can have a cascading effect, impacting multiple organizations downstream. For CISOs, this means that traditional perimeter-based security is no longer sufficient. Instead, they must take a holistic approach that considers every entity with access to critical systems or data as a potential risk vector. ... Building a secure supply chain is not a one-time project—it’s an ongoing journey that demands leadership, collaboration, and adaptability. CISOs must position themselves as business enablers, guiding the organization to view cybersecurity not as a barrier but as a competitive advantage. This starts with embedding cybersecurity considerations into every stage of the supplier lifecycle, from onboarding to offboarding. Leadership engagement is crucial: CISOs should regularly brief the executive team and board on supply chain risks, translating technical findings into business impacts such as potential downtime, reputational damage, or regulatory penalties.


Developers Must Slay the Complexity and Security Issues of AI Coding Tools

Beyond adding further complexity to the codebase, AI models also lack the contextual nuance that is often necessary for creating high-quality, secure code, primarily when used by developers who lack security knowledge. As a result, vulnerabilities and other flaws are being introduced at a pace never before seen. The current software environment has grown out of control security-wise, showing no signs of slowing down. But there is hope for slaying these twin dragons of complexity and insecurity. Organizations must step into the dragon’s lair armed with strong developer risk management, backed by education and upskilling that gives developers the tools they need to bring software under control. ... AI tools increase the speed of code delivery, enhancing efficiency in raw production, but those early productivity gains are being overwhelmed by code maintainability issues later in the SDLC. The answer is to address those issues at the beginning, before they put applications and data at risk. ... Organizations involved in software creation need to change their culture, adopting a security-first mindset in which secure software is seen not just as a technical issue but as a business priority. Persistent attacks and high-profile data breaches have become too common for boardrooms and CEOs to ignore. 

Daily Tech Digest - April 30, 2025


Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown


Common Pitfalls and New Challenges in IT Automation

“You don’t know what you don’t know and can’t improve what you can’t see. Without process visibility, automation efforts may lead to automating flawed processes. In effect, accelerating problems while wasting both time and resources and leading to diminished goodwill by skeptics,” says Kerry Brown, transformation evangelist at Celonis, a process mining and process intelligence provider. The aim of automating processes is to improve how the business performs. That means drawing a direct line from the automation effort to a well-defined ROI. ... Data is arguably the most boring issue on IT’s plate. That’s because it requires a ton of effort to update, label, manage and store massive amounts of data, and the job is never quite done. It may be boring work, but it is essential and can be fatal if left for later. “One of the most significant mistakes CIOs make when approaching automation is underestimating the importance of data quality. Automation tools are designed to process and analyze data at scale, but they rely entirely on the quality of the input data,” says Shuai Guan, co-founder and CEO at Thunderbit, an AI web scraper tool. ... “CIOs often fall into the trap of thinking automation is just about suppressing noise and reducing ticket volumes. While that’s one fairly common use case, automation can offer much more value when done strategically,” says Erik Gaston.


Outmaneuvering Tariffs: Navigating Disruption with Data-Driven Resilience

The fact that tariffs are coming was expected – President Donald Trump campaigned promising tariffs – but few could have expected their severity (145% on Chinese imports, as of this writing) and their pace of change (prohibitively high “reciprocal” tariffs on 100+ countries, only to be temporarily rescinded days later). Also unpredictable were second-order effects such as stock and bond market reactions, affecting the cost of capital, and the impact on consumer demand, due to the changing expectations of inflation or concerns of job loss. ... Most organizations will have fragmented views of data, including views of all of the components that come from a given supplier or are delivered through a specific transportation provider. They may have a product-centric view that includes all suppliers that contribute all of the components of a given product. But this data often resides in a variety of supplier-management apps, procurement apps, demand forecasting apps, and other types of apps. Some may be consolidated into a data lakehouse or a cloud data warehouse to enable advanced analytics, but the time required by a data engineering team to build the necessary data pipelines from these systems is often multiple days or weeks, and such pipelines will usually only be implemented for scenarios that the business expects will be stable over time.


The state of intrusions: Stolen credentials and perimeter exploits on the rise, as phishing wanes

What’s worrying is that in over half of intrusions (57%) the victim organizations learned about the compromise of their networks and systems from a third-party rather than discovering them through internal means. In 14% of cases, organizations were notified directly by attackers, usually in the form of ransom notes, but 43% of cases involved external entities such as a cybersecurity company or law enforcement agencies. The average time attackers spent inside a network until being discovered last year was 11 days, a one-day increase over 2023, though still a major improvement versus a decade ago when the average discovery time was 205 days. Attacker dwell time, as Mandiant calls it, has steadily decreased over the years, which is a good sign ... In terms of ransomware, the most common infection vector observed by Mandiant last year were brute-force attacks (26%), such as password spraying and use of common default credentials, followed by stolen credentials and exploits (21% each), prior compromises resulting in sold access (15%), and third-party compromises (10%). Cloud accounts and assets were compromised through phishing (39%), stolen credentials (35%), SIM swapping (6%), and voice phishing (6%). Over two-thirds of cloud compromises resulted in data theft and 38% were financially motivated with data extortion, business email compromise, ransomware, and cryptocurrency fraud being leading goals.


Three Ways AI Can Weaken Your Cybersecurity

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI. ... While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper that concluded that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time. ... Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure to code execution rights. Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirety of a model, including all of its weights. ... A form of adversarial AI attack, data poisoning or data manipulation poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.
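One cheap, partial guard against slopsquatting is to verify that an AI-suggested dependency even exists before installing it. The sketch below checks names against PyPI's public JSON endpoint; note that existence alone proves nothing, since squatters register the hallucinated names, so an unknown or suspiciously new package should route to human review rather than auto-install.

```python
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Check whether a package name is registered on PyPI."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=5):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

for pkg in sys.argv[1:]:   # usage: python check_deps.py requests flask madeuplib
    status = "found" if exists_on_pypi(pkg) else "MISSING (possible hallucination)"
    print(f"{pkg}: {status}")
```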


AI Has Moved From Experimentation to Execution in Enterprise IT

According to the SOAS report, 94% of organisations are deploying applications across multiple environments—including public clouds, private clouds, on-premises data centers, edge computing, and colocation facilities—to meet varied scalability, cost, and compliance requirements. Consequently, most decision-makers see hybrid environments as critical to their operational flexibility. 91% cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68%) and cost efficiencies (59%). A hybrid approach is also reflected in deployment strategies for AI workloads, with 51% planning to use models across both cloud and on-premises environments for the foreseeable future. Significantly, 79% of organisations recently repatriated at least one application from the public cloud back to an on-premises or co-location environment, citing cost control, security concerns, and predictability. ... “While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,” said Cindy Borovick, Director of Market and Competitive Intelligence,


Trying to Scale With a Small Team? Here's How to Drive Growth Without Draining Your Resources

To be an effective entrepreneur or leader, communication is key, and prioritizing initiatives that directly align with the overall strategic vision ensures that your lean team is working on projects that have the greatest impact. Integrate key frameworks such as Responsible, Accountable, Consulted, and Informed (RACI) and Objectives and Key Results (OKRs) to maintain transparency and focus and to measure progress. By focusing efforts on high-impact activities, your lean team can achieve significant results without the unnecessary strain usually attributable to early-stage organizations. ... Many think that agile methodologies are only for the fast-moving software development industry — but in reality, the frameworks are powerful tools for lean teams in any industry. Encouraging the right culture is key, where quick pivots, regular genuine feedback loops and leadership that promotes continuous improvement are part of the everyday workflows. This agile mindset, when adopted early, helps teams rapidly respond to market changes and client issues. ... Trusting others builds rapport. Assigning clear ownership of tasks while allowing team members the autonomy to execute strategies creatively and efficiently, while also allowing them to fail, is how trust is created.


Effecting Culture Changes in Product Teams

Depending on the organization, the responsibility of successfully leading a culture shift among the product team could fall to various individuals – the CPO, VP of product development, product manager, etc. But regardless of the specific title, to be an effective leader, you can’t assume you know all the answers. Start by having one-to-one conversations with numerous members on the product/engineering team. Ask for their input and understand, from their perspective, what is working, what’s not working, and what ideas they have for how to accelerate product release timelines. After conducting one-to-one discussions, sit down and correlate the information. Where are the common denominators? Did multiple team members make the same suggestions? Identify the roadblocks that are slowing down the product team or standing in the way of delivering incremental value on a more regular basis. In many cases, tech leaders will find that their team already knows how to fix the issue – they just need permission to do things a bit differently and adjust company policies/procedures to better support a more accelerated timeline. Talking one-on-one with team members also helps resolve any misunderstandings around why the pace of work must change as the company scales and accumulates more customers. Product engineers often have a clear vision of what the end product should entail, and they want to be able to deliver on that vision.


Microsoft Confirms Password Spraying Attack — What You Need To Know

The password spraying attack exploited a command line interface tool called AzureChecker to “download AES-encrypted data that when decrypted reveals the list of password spray targets,” the report said. Then, adding salt to the now-open wound, it accepted as input an accounts.txt file containing the username and password combinations used for the attack. “The threat actor then used the information from both files and posted the credentials to the target tenants for validation,” Microsoft explained. The successful attack enabled the Storm-1977 hackers to leverage a guest account to create a compromised subscription resource group and, ultimately, more than 200 containers that were used for cryptomining. ... Passwords are no longer enough to keep us safe online. That’s the view of Chris Burton, head of professional services at Pentest People, who told me that “where possible, we should be using passkeys, they’re far more secure, even if adoption is still patchy.” Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant, is no less adamant when it comes to going passwordless. ... And Brian Pontarelli, CEO of FusionAuth, said that the teams who are building the future of passwords are the same ones that are building and managing the login pages of their apps. “Some of them are getting rid of passwords entirely,” Pontarelli said.


The secret weapon for transformation? Treating it like a merger

Like an IMO, a transformation office serves as the conductor — setting the tempo, aligning initiatives and resolving portfolio-level tensions before they turn into performance issues. It defines the “music” everyone should be playing: a unified vision for experience, business architecture, technology design and most importantly, change management. It also builds connective tissue. It doesn’t just write the blueprint — it stays close to initiative or project leads to ensure adherence, adapts when necessary and surfaces interdependencies that might otherwise go unnoticed. ... What makes the transformation office truly effective isn’t just the caliber of its domain leaders — it’s the steering committee of cross-functional VPs from core business units and corporate functions that provides strategic direction and enterprise-wide accountability. This group sets the course, breaks ties and ensures that transformation efforts reflect shared priorities rather than siloed agendas. Together, they co-develop and maintain a multi-year roadmap that articulates what capabilities the enterprise needs, when and in what sequence. Crucially, they’re empowered to make decisions that span the legacy seams of the organization — the gray areas where most transformations falter. In this way, the transformation office becomes more than connective tissue; it becomes an engine for enterprise decision-making.


Legacy Modernization: Architecting Real-Time Systems Around a Mainframe

When traffic spikes hit our web portal, those requests would flow through to the mainframe. Unlike cloud systems, mainframes can't elastically scale to handle sudden load increases. This created a bottleneck that could overload the mainframe, causing connection timeouts. As timeouts increased, the mainframe would crash, leading to complete service outages with a large blast radius: hundreds of other applications that depend on the mainframe would also be impacted. This is a perfect example of the problems with synchronous connections to the mainframes. When the mainframes could be overwhelmed by a highly elastic resource like the web, the result could be failure in datastores, and sometimes that failure could cause all of the consuming applications to fail. ... Change Data Capture (CDC) became the foundation of our new architecture. Instead of batch ETLs running a few times daily, CDC streamed data changes from the mainframes in near real-time. This created what we called a "system-of-reference" - not the authoritative source of truth (the mainframe remains "system-of-record"), but a continuously updated reflection of it. The system of reference is not a proxy of the system of record, which is why our website was still live when the mainframe went down.
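A minimal sketch of that pattern: CDC events stream out of the mainframe's change log and are applied to a separate read store, so web traffic reads the system-of-reference and never opens a synchronous connection to the system-of-record. The event shape is illustrative; real feeds come from a CDC tool via a log or message stream.

```python
import json

read_store: dict[str, dict] = {}   # stands in for the system-of-reference datastore

def apply_cdc_event(raw: str) -> None:
    event = json.loads(raw)
    key = event["key"]
    if event["op"] in ("insert", "update"):
        read_store[key] = event["after"]   # reflect the mainframe's new state
    elif event["op"] == "delete":
        read_store.pop(key, None)

# Web reads hit the read store; the mainframe remains the system-of-record.
apply_cdc_event('{"op": "insert", "key": "acct-42", "after": {"balance": 100}}')
apply_cdc_event('{"op": "update", "key": "acct-42", "after": {"balance": 85}}')
print(read_store["acct-42"])   # {'balance': 85}, served even if the mainframe is down
```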

Daily Tech Digest - April 29, 2025


Quote for the day:

"Don't let yesterday take up too much of today." -- Will Rogers



AI and Analytics in 2025 — 6 Trends Driving the Future

As AI becomes deeply embedded in enterprise operations and agentic capabilities are unlocked, concerns around data privacy, security and governance will take center stage. With emerging technologies evolving at speed, a mindset of continuous adaptation will be required to ensure requisite data privacy, combat cyber risks and successfully achieve digital resilience. As organizations expand their global footprint, understanding the implications of evolving AI regulations across regions will be crucial. While unifying data is essential for maximizing value, ensuring compliance with diverse regulatory frameworks is mandatory. A nuanced approach to regional regulations will be key for organizations navigating this dynamic landscape. ... As the technology landscape evolves, continuous learning becomes essential. Professionals must stay updated on the latest technologies while letting go of outdated practices. Tech talent responsible for building AI systems must be upskilled in evolving AI technologies. At the same time, employees across the organization need training to collaborate effectively with AI, ensuring seamless integration and success. Whether through internal upskilling or embarking on skills-focused partnerships, investment in talent management will prove crucial to winning the tech-talent gold rush and thriving in 2025 and beyond.


Generative AI is not replacing jobs or hurting wages at all, say economists

The researchers looked at the extent to which company investment in AI has contributed to worker adoption of AI tools, and also how chatbot adoption affected workplace processes. While firm-led investment in AI boosted the adoption of AI tools — saving time for 64 to 90 percent of users across the studied occupations — chatbots had a mixed impact on work quality and satisfaction. The economists found, for example, that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves." In other words, AI is creating new work that cancels out some of the potential time savings from using AI in the first place. "One very stark example that's close to home for me is there are a lot of teachers who now say they spend time trying to detect whether their students are using ChatGPT to cheat on their homework," explained Humlum. He also observed that many workers now say they spend time reviewing the quality of AI output or writing prompts. Humlum argues this can be spun negatively, as a subtraction from potential productivity gains, or more positively, in the sense that automation tools have historically tended to generate more demand for workers in other tasks. "These new job tasks create new demand for workers, which may boost their wages, if these are more high value added tasks," he said.


Advancing Digital Systems for Inclusive Public Services

Uganda adopted the Modular Open Source Identity Platform (MOSIP) two years ago. A small team of 12, with limited technical expertise, began adapting the MOSIP platform to align with Uganda's Registration of Persons Act, gradually building internal capacity. By the time the system integrator was brought in, Uganda had incorporated the digital public good (DPG) into its legal framework, providing the integrator with a foundation to build upon. This early customization helped shape the legal and technical framework needed to scale the platform. But improvements are needed, particularly in the documentation of the DPG. "Standardization, information security and inclusion were central to our work with MOSIP," Kisembo said. "Consent became a critical focus and is now embedded across the platform, raising awareness about privacy and data protection." ... Nigeria, with a population of approximately 250 million, is taking steps to coordinate its previously fragmented digital systems through a national DPI framework. The country has deployed multiple digital solutions over the last 10 to 15 years, often developed in silos by different ministries and private sector agencies. In 2023 and 2024, Nigeria developed a strategic framework to unify these systems and guide its DPI adoption.


Eyes, ears, and now arms: IoT is alive

In just a few years, devices at home and work started including cameras to see and microphones to hear. Now, with new lines of vacuums and emerging humanoid robots, devices have appendages to manipulate the world around them. They’re not only able to collect information about their environment but can touch, “feel,” and move it. ... But, knowing the history of smart devices getting hacked, there’s cause for concern. From compromised baby monitors to open video doorbell feeds, bad actors have exploited default passwords and unencrypted communications for years. And now, beyond seeing and hearing, we’re on the verge of letting devices roam around our homes and offices with literal arms. What’s stopping a hacked robot vacuum from tampering with security systems? Or your humanoid helper from opening the front door? ... If developers want these robots to become a reality, they need to create confidence in the systems immediately. This means following cybersecurity best practices: enabling peer-to-peer connectivity, outlawing generic credentials, and supporting software throughout the device lifecycle. Likewise, users can more safely participate in the robot revolution by segmenting their home networks, implementing multi-factor authentication, and regularly reviewing device permissions.
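As one concrete illustration of "outlawing generic credentials," here is a minimal Python sketch of a provisioning check that refuses factory-default username/password pairs and issues a unique per-device secret instead. The denylist and length rule are illustrative assumptions, not any specific vendor's policy.

```python
import secrets

# Illustrative denylist of factory-default credential pairs; a real
# product would enforce uniqueness at provisioning time instead of
# shipping any shared default at all.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "12345")}

def provision_device(username: str, password: str) -> str:
    """Refuse generic or weak credentials, then issue a unique secret."""
    if (username, password) in DEFAULT_CREDENTIALS or len(password) < 12:
        raise ValueError("generic or weak credentials are not allowed")
    return secrets.token_urlsafe(32)  # unique per-device API secret

try:
    provision_device("admin", "admin")
except ValueError as err:
    print(err)  # generic or weak credentials are not allowed

device_secret = provision_device("owner", "long-unique-passphrase")
```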


How to Launch a Freelance Software Development Career

Finding freelance work can be challenging in many fields, but it tends to be especially difficult for software developers. One reason is that many software development projects do not lend themselves well to a freelancing model because they require a lot of ongoing communication and maintenance. This means that, to freelance successfully as a developer, you'll need to seek out gigs that are sufficiently well defined and finite in scope that you can complete them within a predictable period of time. ... Specifically, you need to envision yourself also as a project manager, a finance director, and an accountant. When you can do these things, it becomes easier not just to freelance profitably, but also to convince prospective clients that you know what you're doing and that they can trust you to complete projects with quality and on time. ... While creating a portfolio may seem obvious enough, one pitfall that new freelancers sometimes run into is being unable to share work due to nondisclosure agreements they sign with clients. When negotiating contracts, avoid this risk by ensuring that you'll retain the right to share key aspects of a project for the purpose of promoting your own services. Even if clients won't agree to let you share source code, they'll often at least allow you to show off the end product and discuss at a high level how you approached and completed a project.


Digital twins critical for digital transformation to fly in aerospace

Among the key conclusions was that there is a critical need to examine the standards that currently support the development of digital twins, identify gaps in the governance landscape, and establish expectations for the future. ... The net result will be that stakeholder needs and objectives become more achievable, resulting in affordable solutions that shorten test, demonstration, certification and verification, thereby decreasing lifecycle cost while increasing product performance and availability. Yet the DTC cautioned that cyber security considerations within a digital twin and across its external interfaces must be customisable to suit the environment and risk tolerance of digital twin owners. ... First, the DTC said that evidence suggests a need to examine the standards that currently support digital twins, identify gaps in the governance landscape, and set expectations for future standards development. In addition, the research team identified standardisation challenges in developing, integrating and maintaining digital twins during design, production and sustainment. There was also a critical need to identify and manage requirements that support interoperability between digital twins throughout the lifecycle. This recommendation also applied to the more complex system-of-systems (SoS) digital twin development initiatives. Digital twin model calibration needs to be an automated process and should be applicable to dynamically varying model parameters, as the sketch below illustrates in miniature.
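To show what automated calibration against dynamically varying parameters might look like, here is a hedged, toy Python sketch: each batch of fresh sensor readings re-fits a one-parameter model by least squares, so the twin tracks drift in the physical asset. The model and data are invented for the example and are far simpler than any aerospace digital twin.

```python
# Toy sketch of automated digital-twin calibration (illustrative only):
# each batch of fresh sensor readings re-fits the model parameter so the
# twin tracks dynamically varying behaviour of the physical asset.

def calibrate(samples):
    """Closed-form least squares for a one-parameter model y = k * x."""
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    return sxy / sxx

def predict(k, x):
    return k * x

# Two batches of (load, response) readings; the asset's behaviour drifts
# between batches, and each recalibration pass absorbs the change.
batch_1 = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
batch_2 = [(1.0, 2.6), (2.0, 5.1), (3.0, 7.4)]

for batch in (batch_1, batch_2):
    k = calibrate(batch)
    print(f"calibrated k = {k:.2f}; prediction at load 4.0: {predict(k, 4.0):.2f}")
```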


Quality begins with planning: Building software with the right mindset

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn’t one team’s job; it’s everyone’s job. Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance. ... One of the biggest causes of software failure isn’t building the thing wrong; it’s building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn’t solve the right problem. That’s why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?
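As a small, hypothetical illustration of quality starting at planning, the sketch below captures a requirement clarified during requirements gathering ("discounts never push a price below zero") directly as tests, so the edge case is owned by the whole team before any QA handoff. The function and rule are invented for the example.

```python
# Hypothetical example: a requirement clarified during planning
# ("discounts never push a price below zero") captured directly as
# tests, so the edge case is owned by the whole team from day one.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping at zero per the requirement."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100 percent")
    return max(0.0, price * (1 - percent / 100))

def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_full_discount_clamps_at_zero():
    assert apply_discount(100.0, 100) == 0.0

def test_out_of_range_discount_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

for test in (test_typical_discount, test_full_discount_clamps_at_zero,
             test_out_of_range_discount_rejected):
    test()
```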


What Makes You a Unicorn in Your Industry? Start by Mastering These 4 Pillars

First, you have to have the capacity, the skill, to excel in that area. Additionally, you have to learn how to leverage that standout aspect to make it work for you in the marketplace - incorporating it into your branding, spotlighting it in your messaging, maybe even including it in your name. Concise as the notion is, there's actually a lot of breadth and flexibility in it: when it comes to selecting what you want to do better than anyone else, your choices are boundless. ... Consumers have gotten quite savvy at sniffing out false sincerity, so when they come across the real thing, they're much more prone to give you their business. Basically, when your client base believes you prioritize your vision, your team and creating an incredible product or service over financial gain, they want to work with you. ... Building and maintaining a remarkable "company culture" can just be a buzzword to you, or you can bring it to life. I can't think of any single factor that makes my company more valuable to my clients than the value I place on my people and the experience I endeavor to provide them by working for me. When my staff feels openly recognized, wholly supported and vitally important to achieving our shared outcomes, we're truly unstoppable. So keep in mind that your unicorn focus can be internal, not necessarily client-facing.



Conquering the costs and complexity of cloud, Kubernetes, and AI

While IT leaders clearly see the value in platform teams—nine in 10 organizations have a defined platform engineering team—there’s a disconnect between recognizing their importance and enabling their success. This gap signals major stumbling blocks ahead that risk derailing platform team initiatives if not addressed early and strategically. For example, platform teams find themselves burdened by constant manual monitoring, limited visibility into expenses, and a lack of standardization across environments. These challenges are only amplified by the introduction of new and complex AI projects. ... Platform teams that manually juggle cost monitoring across cloud, Kubernetes, and AI initiatives find themselves stretched thin and trapped in a tactical loop of managing complex multi-cluster Kubernetes environments. This prevents them from driving the strategic initiatives that could actually transform their organizations’ capabilities. These challenges reflect the overall complexity of modern cloud, Kubernetes, and AI environments. While platform teams are chartered with providing the infrastructure and tools necessary to empower efficient development, many resort to short-term patchwork solutions without a cohesive strategy.
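As a hedged sketch of what replacing manual monitoring with even basic automation could look like, the snippet below uses the official Kubernetes Python client (an assumed dependency with cluster access) to total CPU requests per namespace, a rough proxy for spend attribution. A real setup would rely on a dedicated cost tool and actual usage data rather than requests alone.

```python
from collections import defaultdict
from kubernetes import client, config  # assumes the official Kubernetes client

def cpu_to_cores(quantity: str) -> float:
    """Convert Kubernetes CPU quantities ('250m' or '2') to cores."""
    if quantity.endswith("m"):
        return float(quantity[:-1]) / 1000
    return float(quantity)

config.load_kube_config()  # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

# Total requested CPU per namespace: a rough proxy for cost attribution.
requested = defaultdict(float)
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        requests = container.resources.requests or {}
        requested[pod.metadata.namespace] += cpu_to_cores(requests.get("cpu", "0"))

for namespace, cores in sorted(requested.items(), key=lambda kv: -kv[1]):
    print(f"{namespace:30s} {cores:6.2f} cores requested")
```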


Reporting lines: Could separating from IT help CISOs?

CFOs may be primarily concerned with the financial performance of the business, but they also play a key role in managing organizational risk. This is where CISOs can learn the tradecraft of translating technical measures into business risk management. ... “A CFO comes through the finance ranks without a lot of exposure to IT, and I can see how they’re incentivized to hit targets and forecasts, rather than thinking: if I spend another two million on cyber risk mitigation, I may save 20 million in three years’ time because an incident was prevented,” says Schat. Budgeting and forecasting cycles can be a mystery to CISOs, who may engage with the CFO infrequently, with interactions mostly transactional around budget sign-off on cybersecurity initiatives, according to Gartner. ... It’s not uncommon for CISOs to find security treated as a barrier, one whose benefits aren’t always obvious and which can be at odds with the metrics that drive the CIO. “Security might slow down a project, introduce a layer of complexity that we need from a security perspective, but it doesn’t obviously help the customer,” says Bennett. Reporting to CFOs can relieve potential conflicts of interest. It can allow CISOs to broaden their involvement across all areas of the organization, beyond input into technology, because security and managing risk are a whole-of-business mission.