Daily Tech Digest - January 26, 2025


Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Here’s Why Physical AI Is Rapidly Gaining Ground And Lauded As The Next AI Big Breakthrough

If we are going to connect generative AI to all kinds of robots and other machines that are wandering around in our homes, offices, factories, streets, and the like, we ought to expect that the AI will do so properly, safely, and with aplomb. Can an AI trained only on text-based data adequately control and direct those real-world machines as they mix among people? Some assert that this is a serious danger. Generative AI relies on what amounts to book learning to guess what will happen when it instructs a robot to lift a chair or hold a dog aloft. Is that good enough to cope with the myriad things that can go wrong? Perhaps the AI, reasoning purely from text, will assume that if the dog is dropped, it will bounce like a rubber ball. Ouch, the dog might not be amused. ... AI researchers are scurrying to craft Physical AI. The future depends on this capability. Machines and robots are going to be built and shipped to work side by side with humans. Physical AI will make or break whether those machines are compatible with humans and operate properly in the real world, or instead endanger the people around them.


Why workload repatriation must be part of true multi-cloud strategies

Repatriation can provide benefits such as cost optimization and enhanced control, but it also introduces significant challenges. Key obstacles organizations encounter during cloud repatriation include the absence of cloud-native services, limited access to provider-managed applications, the need for highly skilled professionals, and potentially substantial capital expenditures required for building or upgrading on-premises infrastructure. Migrating workloads back on-premises often results in the development of hybrid environments or, in cases where multiple public cloud providers are used, multi-cloud environments. This shift adds complexity to managing IT infrastructure, requiring greater coordination and expertise. In public cloud environments, providers offer a wide array of managed services, automated management, and orchestration capabilities that simplify operations and reduce the burden on IT teams. When repatriating workloads, organizations must find alternatives or develop in-house solutions to replicate these functionalities. This can be time-consuming, costly, and may result in reduced capabilities compared to cloud-native offerings. As such, organizations must carefully balance the trade-offs between the advanced capabilities of cloud-native solutions and the control offered by on-premises environments. 


3 hidden benefits of Dedicated Internet Access for enterprises

DIA is designed to support bandwidth-heavy tasks such as cloud-based applications and video conferencing. It ensures seamless connectivity, helping streamline operations and prevent performance issues. Routine activities like large file sharing, backups, and data transfers are completed more efficiently, while internal communication across multiple business locations becomes smoother and more reliable. Think of DIA as your business’s private Internet highway. Unlike shared connections, it provides uninterrupted service, essential for maintaining optimal workflows and boosting productivity. For companies that rely on consistent and high-performance Internet access, DIA offers a dependable solution tailored to meet these demands. ... Fast website loading times and smooth online transactions are essential for satisfying customers. DIA helps businesses deliver a premium online experience, which can significantly improve customer loyalty. This reliable performance extends to all business locations, including branch offices. With DIA, businesses can ensure consistent, high-quality interactions with their customers—whether accessing resources or reaching out through support channels. Additionally, DIA enhances customer support by ensuring messaging services remain continuously available, allowing businesses to respond quickly and efficiently to customer needs.


Data engineering - Pryon: Turning chaos into clarity

Data Engineering is the discipline that takes raw, unstructured data and transforms it into actionable, high-value insights. Without a strong data foundation, the one in three enterprises spending an average of $10M on AI projects next year alone are setting themselves up for failure. As data creation accelerates – 90% of the world’s data has been generated in the last two years – engineers are tasked with more than just managing it. They have to structure, organise and operationalise data so it can actually be useful and produce the right outputs. From building reliable pipelines to ensuring data quality, engineering teams play the central role in making systems that actually solve problems. ... Data synthesis is interesting, but taking action is paramount. The final step is putting it to work. Whether that means automating workflows, making real-time decisions, or delivering predictive insights, this is where the rubber meets the road. Agentic orchestration can enable systems to take the synthesised insights and act on them autonomously or with minimal human input. These engines bridge the gap between theory and practice, ensuring that your data doesn’t just sit idle – it drives measurable outcomes.


Leading with purpose: Insights from the Bhagavad Gita for modern managers

In a professional setting, the ability to manage emotions is crucial for success. A manager or individual who seeks gratification of ego and cannot regulate their emotions is likely to face challenges in achieving results. Actions driven by a sense of false ego can lead to conflicts and misunderstandings, and ultimately hinder productivity. Such individuals may react impulsively rather than thoughtfully, allowing their emotions to cloud their judgment. When individuals learn to regulate their emotions and act from a place of calmness rather than chaos, they not only enhance their performance but also uplift those around them. A Sattvic approach to work fosters collaboration, creativity, and a shared sense of purpose. Conversely, when actions are driven by ego or excessive ambition (Tamasic), they often lead to stress and burnout. By embodying the teachings of the Gita—performing duties with dedication while remaining unattached to outcomes—individuals can achieve true mastery over their emotions. This mastery not only paves the way for personal success but also cultivates an environment where everyone can thrive together. While the entire Bhagavad Gita is replete with invaluable life lessons, these two shlokas stand out as particularly essential for effective management in the workplace.


Accelerating HCM Cloud Implementation With RPA

Robotic Process Automation (RPA) provides a practical solution to streamline these processes. ... Many cloud platforms require Multi-Factor Authentication (MFA), which disrupts standard login routines for bots. However, we have addressed this by programmatically enabling RPA bots to handle MFA through integration with SMS or email-based OTP services. This allows seamless automation of login processes, even with additional security layers. ... It’s essential that users are assigned the correct authorizations in an HCM cloud, with ongoing maintenance of these permissions as individuals transition within the organization. Even with a well-defined scheme in place, it’s easy for someone to be shifted into a role that they shouldn’t hold. To address this challenge, we have leveraged RPA to automate the assignment of roles, ensuring adherence to least-privilege access models. ... Integrating with HCM systems through APIs often involves navigating rate limits that can disrupt workflows. To address this challenge, we implemented robust retry logic within our RPA bots, utilizing exponential backoff to gracefully handle API rate limit errors. This approach not only minimizes disruptions but also ensures that critical operations continue smoothly.
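
That retry pattern is easy to sketch. Below is a minimal, hypothetical Python example of exponential backoff against an HTTP 429 rate-limit response; the endpoint URL is illustrative and not tied to any particular HCM product:

```python
import time
import requests

def call_with_backoff(url, max_retries=5, base_delay=1.0):
    """Call an API endpoint, retrying on HTTP 429 with exponential backoff."""
    for attempt in range(max_retries):
        response = requests.get(url, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor the server's Retry-After header if present;
        # otherwise back off exponentially: 1s, 2s, 4s, 8s, ...
        delay = float(response.headers.get("Retry-After", base_delay * (2 ** attempt)))
        time.sleep(delay)
    raise RuntimeError(f"Rate limit persisted after {max_retries} retries: {url}")

# Illustrative usage; the endpoint is hypothetical.
# employees = call_with_backoff("https://hcm.example.com/api/v1/employees")
```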


MDM and genAI: A match made in Heaven — or something less?

Despite its promising potential, AIoT faces several hurdles. One major challenge is interoperability. Many companies use IIoT devices and platforms from different manufacturers, which are not always seamlessly compatible. This complicates the implementation of integrated AIoT solutions and necessitates standardised interfaces and protocols. IIoT platforms such as Cumulocity can integrate various services and devices. A well-chosen platform facilitates the integration of new devices, enables easy scaling, and supports the flexible adaptation of an IIoT strategy. It also allows integration with other systems and technologies, such as ERP or CRM systems, thereby embedding IIoT technologies into existing business processes. Moreover, robust platforms offer specialised security features to protect connected devices from potential cybercriminal attacks. Another critical aspect is data preparation. In IoT environments, data quality is often poorer than businesses assume. Applying AI to inadequately prepared data produces subpar models that fail to deliver expected results. ... A further challenge is the skills shortage. Developing and implementing AIoT systems requires expertise in fields such as data analysis, machine learning, and cybersecurity. The demand for skilled professionals exceeds current supply, prompting companies to invest in training and development programmes.


Enterprise Architecture and Complexity

Complex architectures are characterised by attributes that make them challenging to manage using traditional project or program management methods. These architectures often have many layers, interconnected parts, variables, and dynamics that are not immediately apparent or easily understood. Complex architectures are also unpredictable (Theiss 2023) due to the communication and interaction required across and between the components. Managing an architecture build and deployment requires both broad and deep understanding of the interdependencies, interactions, and inherent constraints. As increasing levels of automation are deployed at scale, greater visibility and transparency are needed to understand not only the technologies and applications in play, but also the intended and unintended consequences and behaviour that they generate. Architectural artefacts and systems documentation (even if up to date) typically show elements such as nested operational processes as simple, generalised linkages and design patterns, which results in greater ambiguity, not clarity. They only allow us to understand in part. As systems architectures become more complex in build, capability and scope, enhanced sense-making capabilities are needed to navigate components and to ensure a coherent, adaptive systems design.


Misinformation Is No. 1 Global Risk, Cyberespionage in Top 5

Misinformation campaigns in the form of deepfakes, synthetic voice recordings or fabricated news stories are now a leading mechanism for foreign entities to influence "voter intentions, sow doubt among the general public about what is happening in conflict zones, or tarnish the image of products or services from another country." This is especially acute in India, Germany, Brazil and the United States. Concern remains especially high following a year of the so-called "super elections," which saw heightened state-sponsored campaigns designed to manipulate public opinion. ... Despite growing concerns, cyber resilience continues to be inadequate, especially among small and mid-sized organizations, according to the report's findings. Thirty-five percent of small organizations believe their cyber resilience is inadequate, up from 5% in 2022. Many of these organizations lack the resources to invest in advanced cybersecurity measures, leaving them increasingly vulnerable to ransomware, phishing and other attacks. Seventy-one percent of cyber leaders say small organizations have already reached a "tipping point where they can no longer adequately secure themselves against the growing complexity of cyber risks." ... On one hand, AI-powered systems are proving invaluable in identifying threats, automating responses and analyzing vast amounts of data in real time.


Cloud repatriation – how to balance repatriation effectively and securely

Regardless of the reasons for making the move away from public cloud, the road to repatriation can be complex to navigate. Whether it is technical or talent issues, financial costs or compliance challenges, businesses making the switch should be prepared to spend time planning and executing an effective strategy. Within this strategy there are three areas that require special attention: observability, compliance and employing a holistic tech stack strategy. Observability is crucial in cloud repatriation because in order to move data and applications in-house, a business must understand them and how they are being used. It is only then that you can ensure a smooth and effective transition. For example, there might be Shadow IT or AI being used by employees to get around IT policy and help them get their work done faster. Sometimes these technologies will store data on a cloud service, so businesses need to be aware of them before making the switch. By leveraging observability, organizations can mitigate risks, optimize their infrastructure, and achieve successful repatriation that meets their strategic objectives. Compliance is also important as it is a major focus area for European and UK regulators, with new and emerging regulations like DORA and NIS2 coming to the fore.


Daily Tech Digest - January 25, 2025


Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkel


How to Prepare for Life After NB-IoT

Last November, the IoT world was caught off guard by AT&T’s announcement that it would discontinue support for Narrowband IoT (NB-IoT) by Q1 2025. For many, this came as a big surprise. NB-IoT was considered a prodigy technology, promising low-power, long-range, and low-cost connectivity. While NB-IoT never reached mass adoption in the US, the decision still came as a blow to those who had invested in the technology, and raised concerns about its viability among people outside the US. ... Fortunately, most IoT modules supporting NB-IoT also support LTE-M. Modules typically select the optimal network and technology based on signal quality and internal radio settings. Devices with roaming enabled will automatically switch networks or technologies if the primary connection fails. Once AT&T shuts down its network, your devices will automatically switch to another technology or network if set up correctly. However, rather than waiting for the network to become unavailable, you may want to stay in control of transitioning to another technology. This also allows you to test the process with a subset of devices before rolling out updates to your entire fleet. Assuming your cellular modules support LTE-M, and you have remote access to update configurations, you can update the radio access technology (RAT) using a simple AT command.
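
What that looks like in practice varies by module vendor. The sketch below uses Python with the pyserial package to send a u-blox-style AT+URAT command (Quectel, SIMCom and others use different syntax); the serial port, baud rate and command values are assumptions, so consult your module's AT manual:

```python
import serial  # provided by the pyserial package

# Assumed serial port and baud rate; adjust for your hardware.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as modem:
    def send(cmd):
        modem.write((cmd + "\r\n").encode())
        return modem.read(256).decode(errors="replace")

    # u-blox-style RAT selection: 7 = LTE-M (Cat M1), 8 = NB-IoT.
    # Other vendors use different commands; check the AT manual for your module.
    print(send("AT+URAT=7"))   # set radio access technology to LTE-M
    print(send("AT+CFUN=15"))  # reset the module so the change takes effect (u-blox syntax)
```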


Creating efficient, relevant, and lasting regulations requires several key factors. First and foremost, policymakers need a working definition of the object of their laws, which requires thorough work to capture the essence of what will be affected by their text. This is a challenging task in the case of AI because its definition remains in flux as the technology evolves. ... The unprecedented surge in generative AI’s popularity created uncertainty for policymakers about how to navigate the new landscape. There was an urgent need for frameworks, definitions, and language to fully understand the impact of this technology and how to frame it. As the technology outpaced expectations, earlier regulatory efforts to address these tools quickly became inadequate and obsolete, leaving policymakers scrambling to catch up. This is precisely the situation Chinese regulators faced in their initial efforts to address the generative AI sector. The basic provisions outlined in the law were insufficient to address the profound societal impacts of generative AI’s widespread adoption. The attempt to establish China as an early player in AI regulation was overtaken by the pace of technological progress and private-sector innovation, rendering even the terminology obsolete.


How to Simplify Automated Security Testing in CI/CD Pipelines

Dependency management is where many teams stumble, and we’ve all seen the fallout from poorly managed libraries (hello, Log4Shell). Automating dependency checks is not an option, it’s a must. Tools like Dependabot, OWASP Dependency-Check, and Renovate take the grunt work out of monitoring for vulnerabilities, raising alerts, and even creating pull requests to fix issues. Imagine a Node.js team drowning in a sea of outdated packages. With Dependabot hooked into their GitHub workflow, every vulnerability gets an automatic pull request to update to a safe version. No manual labor, no guessing games—just a steady rhythm of secure, up-to-date code. Go deeper by leveraging Software Composition Analysis (SCA) tools that don’t just look at direct dependencies but dive into the murky waters of transitive dependencies too.  ... Instead of vague warnings like “Potential SQL Injection Found,” imagine getting, “SQL Injection vulnerability detected at line 45 in user_controller.js. Here’s how to fix it…” Tools like CodeQL and Semgrep do precisely this. They integrate directly into CI pipelines, flag issues, suggest fixes, and provide links to further reading, all without overwhelming the dev team.
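
To make that concrete, here is the kind of before-and-after such a suggested fix amounts to, as a minimal Python sketch (the article's example cites a JavaScript controller, but the parameterized-query principle is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1 OR 1=1"  # attacker-controlled value

# Vulnerable: string interpolation lets the input rewrite the query,
# so the condition above would match every row.
# rows = conn.execute(f"SELECT * FROM users WHERE id = {user_input}")

# Fixed: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE id = ?", (user_input,))
print(rows.fetchall())  # [] -- the injection string matches nothing
```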


Security automation and integration can smooth AppSec friction

Whether you perceive friction between development and security testing to be an impediment or not often depends on your role in the organization. Of the AppSec team members who responded to the survey used for “Global State of DevSecOps” report, 65% felt that testing impeded pipelines “moderately” or “severely.” While the report didn’t survey why they feel this way, we can speculate that it’s due to their proximity to the testing process, or potentially because they’re feeling pressure to accelerate review processes. Since they are closest to the task, they face the highest scrutiny for its efficiency. Of the development and engineering team members who replied to the survey, 58% share the sentiment of their AppSec counterparts. It is, however, important to consider that an additional 12% of the surveyed developers and engineers report that they just don’t have enough visibility into security testing to know what’s going on. Were they to have greater visibility into security testing processes, it is quite possible that they, too, would perceive AppSec testing as an impediment to pipelines. And this lack of visibility makes concerted DevSecOps initiatives more difficult to implement since contributors are unable to close feedback loops or optimize development and testing efforts.


The Power of Many: Crowdsourcing as A Game-Changer for Modern Cyber Defense

Although shared expertise significantly boosts threat detection and hunting efficiency while simultaneously empowering cybersecurity education, there are several stumbling blocks to address on the way to building global crowdsourcing initiatives. While working towards a safer future, contributors to crowdsourced efforts often face issues related to intellectual property rights and the recognition of individual contributions within the professional network. Ensuring proper recognition for discoveries and contributions to global cyber defense at all levels, from author attribution in the code of a detection rule to shareable digital credentials issued by organizations to recognize exceptional individual involvement, is essential to maintaining motivation and fairness. Another challenge is adherence to privacy imperatives and compliance with security regulations, including the Traffic Light Protocol (TLP), while sharing information with a wide audience, since disclosure of sensitive information about vulnerabilities or cyber attacks can pose significant risks both to crowdsourcing program contributors and beneficiaries.


How to Harness the Power of Fear and Transform It Into a Leadership Strength

One of the most powerful ways to address fear is to reframe it as a perception rather than an absolute truth. Fear does not objectively measure threats; it is just one of the mind's faculties. By reframing it as a perception, a leader can make proper decisions, addressing instances of fear directly. Reframing does not stop fear; it changes how it is processed. Leaders are able to move from impulsive to composed behavior by understanding that fear is a conceptual state rather than an actual one. Calming neurotransmitters like serotonin and endorphins take the place of stress chemicals like cortisol and adrenaline, promoting emotional equilibrium and resilience. For leaders, that shift can be radical. By approaching challenges with strength and rationality, not fear, they can spread a ripple effect through their companies. It's a way of creating an environment in which teams feel empowered, excited and pushed to grow and thrive. ... Recognizing fear as a perceived threat allows leaders to respond with reason and confidence. Mastering fear is a critical leadership skill, fostering innovation and collaboration. By transforming fear into a tool for growth, leaders unlock their full potential and inspire others, paving the way for sustained progress.


Nuclear-Powered Data Centers: When Will SMRs Finally Take Off?

Taking stock of the nuclear-powered data center market in 2024, Alan Howard, principal analyst of cloud and colocation services at Omdia, said: “It’s nothing short of exciting that Amazon, Google, and Microsoft have all signed deals for nuclear power… and Meta is publicly on the hunt.” Still, these deals are relatively small by the standards of the data center industry, and Howard cautioned against impatience, citing the mid-2030s as the earliest we can expect to see broad commercial availability of nuclear energy in powering data centers. “The reality is that these [nuclear reactors under construction] are essentially test reactors which is part of the long regulatory road nuclear technology companies must follow.” ... One of the chief challenges facing data center companies is the five-to-seven-year permitting and construction timelines for nuclear facilities, according to Ryan Mallory, COO at data center firm Flexential. “Data center companies must begin securing permits, ground space, and operational expertise to prepare for SMRs to become scalable and repeatable by the 2030s,” Mallory said. There are also technological challenges, according to Steven Carlini, chief data center and AI advocate at Schneider Electric. “Integrating SMRs into the existing ecosystem will be complex,” he said.


13 Cybersecurity Predictions for 2025

AI capabilities are awesome, yet I’m finding that most of the AI capabilities being developed are focused on just getting them to work and into the marketplace as soon as possible. We need to do a much better job of incorporating cybersecurity best practices and secure-by-design principles into the creation, operation, and sustainment of AI systems. The AI Security and Incident Response Team (AISIRT) here at the Software Engineering Institute has discovered numerous material weaknesses and flaws in AI capabilities resulting in vulnerabilities that can be leveraged by hostile entities. AI vulnerabilities are cyber vulnerabilities, and the list of reported vulnerabilities continues to grow. Software engineers are trained to incorporate secure-by-design principles into their work. But neural-network models, including generative AI and LLMs, bring along a wide range of additional kinds of weaknesses and vulnerabilities, and for many of these it is a struggle to develop effective remediations. Until the AI community is able to develop AI-appropriate secure-by-design best practices to augment the secure-by-design practices already familiar to software engineers, I believe we’ll see preventable cyber incidents affecting AI capabilities in 2025. ... Ransomware criminal activity continues to feast on the cyber poor. Cyber criminals have been feasting on those who operate below the cyber poverty line.


Biometrics Institute identifies dire need for clear language in biometrics and AI

Most biometrics experts agree that no one is exactly sure what anyone is talking about. The Biometrics Institute is trying to help, via its Explanatory Dictionary, a resource that aims to capture the nuances in biometric terminology, “considering both formal definitions and how they are perceived by the public – for example, how someone might explain biometrics or AI to a friend.” Because, as of now, there isn’t a standard that is universally agreed-on, nor is there really a clear way to explain biometrics and AI to your neighbour Ted who works in marketing. “There are no universal definitions of biometrics or AI and those put forward by ISO and some governments are either too technical, obtuse or are not fully aligned with one another or are hidden behind paywalls and not accessible to the majority of the general public.” The paper drills down on the semantics of biometric grammar. What does it mean for a biometric application to “have AI”? Conflation of certain terms in both regulatory and public contexts exacerbates the problem. Media struggles to pick apart the web of language, and contributes its own strands in the process. Is a tool “AI-driven,” or “AI-equipped”? Where do algorithms fit in?


How AI Copilots Are Transforming Threat Detection and Response

The rise of AI copilots in cybersecurity is a transformative moment, but it requires a shift in mindset. Security teams should view these tools as partners, not replacements. AI copilots excel at processing vast datasets and identifying patterns, but humans are irreplaceable when it comes to judgment and understanding context. The future of cybersecurity lies in this hybrid approach, where AI enhances human capabilities rather than attempting to replicate them. Business leaders should focus on fostering this collaboration, equipping their teams with the skills and tools needed to work effectively with AI. Additionally, transparency is non-negotiable. Teams must understand how their AI copilots make decisions, ensuring accountability and reducing the risk of errors. This also involves rigorous testing and ongoing monitoring to detect and mitigate biases or vulnerabilities before they can be exploited. ... By empowering security teams with advanced capabilities, businesses can stay ahead of adversaries and secure a resilient future. Looking ahead, AI copilots are just the beginning. As these tools become more advanced, they will evolve beyond copilots into more autonomous AI agents—a shift often referred to as agentic AI. 


Daily Tech Digest - January 24, 2025


Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis


What comes after Design thinking

The first and most obvious one is that we can no longer afford to design things solely for humans. We clearly need to think in non-human, non-monocentric terms if we want to achieve real, positive, long-term impact. Second, HCD fell short in making its practitioners think in systems and leverage the power of relationships to really be able to understand and redesign what has not been serving us or our planet. Lastly, while HCD accomplished great feats in designing better products and services that solve today’s challenges, it fell short in broadening horizons so that these products and systems could pave the way for regenerative systems: the ones that go beyond sustainability and actively restore and revitalize ecosystems, communities, and resources to create lasting, positive impact. Now, everything that we put out in the world needs to have an answer to how it is contributing to a regenerative future. And in order to build a regenerative future, we need to start prioritizing something that is integral to nature: relationships. We need to grow relational capacity, from designing for better interpersonal relationships to establishing systems that facilitate cross-organizational collaboration. We need to think about relational networks and harness their power to recreate more just, trustful, and better functioning systems. We need to think in communities.


FinOps automation: Raising the bar on lowering cloud costs

Successful FinOps automation requires strategies that exploit efficiencies from every angle of cloud optimization. Good data management, negotiations, data manipulation capabilities, and cloud cost distribution strategies are critical to automating cost-effective solutions to minimize cloud spend. This article focuses on how expert FinOps leaders have focused their automation efforts to achieve the greatest benefits. ... Effective automation relies on well-structured data. Intuit and Roku have demonstrated the importance of robust data management strategies, focusing on AWS accounts and Kubernetes cost allocation. Good data engineering enables transparency, visibility, and accurate budgeting and forecasting. ... Automation efforts should focus on areas with the highest potential for cost savings, such as prepayment optimization and waste reduction. Intuit and Roku have achieved significant savings by targeting these high-cost areas. ... Automation tools should be accessible and user-friendly for engineers managing cloud resources. Intuit and Roku have developed tools that simplify resource management and align costs with responsible teams. Automated reporting and forecasting tools help engineers make informed decisions.


Why CISOs Must Think Clearly Amid Regulatory Chaos

At their core, CISOs are truth sayers — akin to an internal audit committee that assesses risks and makes recommendations to improve an organization's defenses and internal controls. Ultimately, though, it's the board and a company's top executives who set policy and decide what to disclose in public filings. CISOs can and should be a counselor for this group effort because they have the understanding of security risk. And yet, the advice they can offer is limited if they don't have full visibility into an organization's technology stack. Many oversee a company's IT system, but not the products the company sells. That's crucial when it comes to data-dependent systems and devices that can provide network-access targets to cyber criminals. Those might include medical devices, or sensors and other Internet of Things endpoints used in manufacturing lines, electric grids, and other critical physical infrastructure. In short: A company's defenses are only as strong as the board and its top executives allow them to be. And if there is a breach, as in the case of SolarWinds? CISOs do not determine the materiality of a cybersecurity incident; a company's top executives and its board make that call. The CISO's responsibilities in that scenario involve responding to the incident and conducting the follow-up forensics required to help minimize or avoid future incidents.


Building Secure Multi-Cloud Architectures: A Framework for Modern Enterprise Applications

The technical controls alone cannot secure multi-cloud environments. Organizations must conduct cloud security architecture reviews before implementing any multi-cloud solution. These reviews should focus on data flow patterns between clouds, authentication and authorization requirements, and compliance obligations across all relevant jurisdictions. Completing these tasks thoroughly and diligently will ensure that multi-cloud security is baked into the architectural layer between the clouds and in the clouds themselves. While thorough architecture reviews establish the foundation, automation brings these security principles to life at scale. Automation provides a major advantage to security operations for multi-cloud environments. By treating infrastructure and security as code, organizations can achieve consistent configurations across clouds, implement automated security testing and enable fast response to security events. This helps with the overall security and operational overhead because it allows us to do more with less and to reduce human error. Our security operations experienced a substantial enhancement when we moved to automated compliance checks. Still, we did not just throw AWS services at the problem. We engaged our security team deeply in the process.
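
As one small illustration of treating security as code, the hedged Python/boto3 sketch below automates a single compliance check: verifying that every S3 bucket has all public-access-block settings enabled. It assumes configured AWS credentials and shows the pattern, not any specific team's tooling:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_blocks_public_access(bucket_name):
    """Return True if all four S3 public-access-block settings are enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)
        settings = config["PublicAccessBlockConfiguration"]
        return all(settings.values())
    except ClientError:
        # No public-access-block configuration at all: treat as non-compliant.
        return False

# Sweep every bucket and report, the kind of check a pipeline can run on a schedule.
for bucket in s3.list_buckets()["Buckets"]:
    status = "OK" if bucket_blocks_public_access(bucket["Name"]) else "NON-COMPLIANT"
    print(f"{status}: {bucket['Name']}")
```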


Scaling Dynamic Application Security Testing (DAST)

One solution is to monitor requests sent to the target web server and extrapolate an OpenAPI Specification based on those requests in real-time. This monitoring could be performed client-side, server-side, or in-between on an API gateway, load-balancer, etc. This is a scalable, automatable solution that does not require each developer’s involvement. Depending on how long it runs, this approach can be limited in comprehensively identifying all web endpoints. For example, if no users called the /logout endpoint, then the /logout endpoint would not be included in the automatically generated OpenAPI Specification. Another solution is to statically analyze the source code for a web service and generate an OpenAPI Specification based on defined API endpoint routes that the automation can glean from the source code. Microsoft internally prototyped this solution and found it to be non-trivial to reliably discover all API endpoint routes and all parameters by parsing abstract syntax trees without access to a working build environment. This solution was also unable to handle scenarios of dynamically registered API route endpoint handlers. ... To truly scale DAST for thousands of web services, we need to automatically, comprehensively, and deterministically generate OpenAPI Specifications.
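
A minimal sketch of the first approach (observing live requests and accumulating a spec) could look like the following Python WSGI middleware. It records only methods and paths; real implementations would also infer parameters and schemas, and every name here is illustrative:

```python
import json

class OpenAPIRecorder:
    """WSGI middleware that records observed method/path pairs as a skeletal OpenAPI spec."""

    def __init__(self, app):
        self.app = app
        self.spec = {
            "openapi": "3.0.0",
            "info": {"title": "Observed API", "version": "0.1"},
            "paths": {},
        }

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "/")
        method = environ.get("REQUEST_METHOD", "GET").lower()
        # Endpoints nobody calls (e.g. /logout) never show up: the limitation noted above.
        self.spec["paths"].setdefault(path, {})[method] = {
            "responses": {"200": {"description": "observed"}}
        }
        return self.app(environ, start_response)

    def dump(self):
        return json.dumps(self.spec, indent=2)

# Usage: wrap any WSGI app, e.g. app = OpenAPIRecorder(flask_app.wsgi_app),
# then periodically persist recorder.dump() for the DAST scanner to consume.
```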


Post-Quantum Cryptography 2025: The Enterprise Readiness Gap

"Quantum technology offers a revolutionary approach to cybersecurity, providing businesses with advanced tools to counter emerging threats," said David Close, chief solutions architect at Futurex. By using quantum machine learning algorithms, organizations can detect threats faster and more accurately. These algorithms identify subtle patterns that indicate multi-vector cyberattacks, enabling proactive responses to potential breaches. Innovations such as quantum key distribution and quantum random number generators enable unbreakable encryption and real-time anomaly detection, making them indispensable in fraud prevention and secure communications, Close said. These technologies not only protect sensitive data but also ensure the integrity of financial transactions and authentication protocols. A cornerstone of quantum security is post-quantum cryptography, PQC. Unlike traditional cryptographic methods, PQC algorithms are designed to withstand attacks from quantum computers. Standards recently established by the National Institute of Standards and Technology include algorithms such as Kyber, Dilithium and SPHINCS+, which promise robust protection against future quantum threats.


Tricking the bad guys: realism and robustness are crucial to deception operations

The goal of deception technology, also known as deception techniques, operations, or tools, is to create an environment that attracts and deceives adversaries to divert them from targeting the organization’s crown jewels. Rapid7 defines deception technology as “a category of incident detection and response technology that helps security teams detect, analyze, and defend against advanced threats by enticing attackers to interact with false IT assets deployed within your network.” Most cybersecurity professionals are familiar with the current most common application of deception technology, honeypots, which are computer systems sacrificed to attract malicious actors. But experts say honeypots are merely decoys deployed as part of what should be more overarching efforts to invite shrewd and easily angered adversaries to buy elaborate deceptions. Companies selling honeypots “may not be thinking about what it takes to develop, enact, and roll out an actual deception operation,” Handorf said. “As I stressed, you have to know your infrastructure. You have to have a handle on your inventory, the log analysis in your case. But you also have to think that a deception operation is not a honeypot. It is more than a honeypot. It is a strategy that you have to think about and implement very decisively and with willful intent.”
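
For orientation, the decoy itself can be as simple as a listener that logs whoever touches it, as in the toy Python sketch below; the operation Handorf describes is everything around it: believable naming, placement in inventory, and log analysis. The port and banner are arbitrary choices for illustration:

```python
import socket
import datetime

# Toy low-interaction honeypot: listen on an unused port and log every connection.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 2222))  # arbitrary decoy port
    server.listen()
    while True:
        conn, addr = server.accept()
        with conn:
            print(f"{datetime.datetime.now().isoformat()} touch from {addr[0]}:{addr[1]}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # fake banner to invite interaction
            data = conn.recv(1024)  # capture whatever the client sends first
            print(f"  first bytes: {data!r}")
```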


Effective Techniques to Refocus on Security Posture

If you work in software development, then “technical debt” is a term that likely triggers strong reactions. Foundationally, technical debt serves a similar function to financial debt. When well-managed, both can be used as leverage for further growth opportunities. In the context of engineering, technical debt can help expand product offerings and operations, helping a business grow faster than the cost of repaying the debt. On the other hand, debt also comes with risks, and the rate of exposure is variable, dependent on circumstance. In the context of security, acceptance of technical debt from End of Life (EoL) software and risky decisions enables threats whose greatest advantage is time, the exact resource that debt leverages. ... The trustworthiness of software depends on its exploitable attack surface, part of which is exploitable vulnerabilities. If the outcome of the SBOM with a VEX attestation is a deeper understanding of the applicable and exploitable vulnerabilities, coupling that information with exploit predictive analysis like EPSS brings valuable information to decision-making. This type of assessment allows for programmatic decision-making. It allows software suppliers to express risk in the context of their applications and empowers software consumers to escalate on problems worth solving.
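
Coupling SBOM-derived CVE lists with exploit prediction is straightforward to prototype, since FIRST publishes EPSS scores through a public API. A minimal sketch, assuming the CVE IDs came out of your own SBOM/VEX analysis:

```python
import requests

def epss_scores(cve_ids):
    """Fetch EPSS exploit-prediction scores for a list of CVE IDs from FIRST's public API."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=30,
    )
    resp.raise_for_status()
    return {item["cve"]: float(item["epss"]) for item in resp.json()["data"]}

# Example: prioritize whatever your SBOM/VEX analysis flagged as exploitable.
scores = epss_scores(["CVE-2021-44228", "CVE-2014-0160"])
for cve, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cve}: {score:.3f} probability of exploitation in the next 30 days")
```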


Sustainability, grid demands, AI workloads will challenge data center growth in 2025

Uptime expects new and expanded data center developers will be asked to provide or store power to support grids. That means data centers will need to actively collaborate with utilities to manage grid demand and stability, potentially shedding load or using local power sources during peak times. Uptime forecasts that data center operators “running non-latency-sensitive workloads, such as specific AI training tasks, could be financially incentivized or mandated to reduce power use when required.” “The context for all of this is that the [power] grid, even if there were no data centers, would have a problem meeting demand over time. They’re having to invest at a rate that is historically off the charts. It’s not just data centers. It’s electric vehicles. It’s air conditioning. It’s decarbonization. But obviously, they are also retiring coal plants and replacing them with renewable plants,” Uptime’s Lawrence explained. “These are much less stable, more intermittent. So, the grid has particular challenges.” ... According to Uptime, infrastructure requirements for next-generation AI will force operators to explore new power architectures, which will drive innovations in data center power delivery. As data centers need to handle much higher power densities, it will throw facilities off balance in terms of how the electrical infrastructure is designed and laid out.


Is the Industrial Metaverse Transforming the E&U Industry?

One major benefit of the industrial metaverse is that it can monitor equipment issues and hazardous conditions in real time so that any fluctuations in the electrical grid are instantly detected. As they collect data and create simulations, digital twins can also function as proactive tools by predicting potential problems before they escalate. “You can see which components are in early stages of failure,” a Hitachi Energy spokesperson notes in this article. “You can see what the impact of failure is and what the time to failure is, so you’re able to make operational decisions, whether it’s a switching operation, deploying a crew, or scheduling an outage, whatever that looks like.” ... Digital twins also make it possible for operators to simulate and test operational changes in virtual environments before real-world implementation, reducing excessive costs. “While it will not totally replace on-site testing, it can significantly reduce physical testing, lower costs and contribute to an increased quality of the protection system,” Andrea Bonetti, a power system protection specialist at Megger, tells the Switzerland-based International Electrotechnical Commission. Shell is one of several energy providers that use digital twins to enhance operations, according to Digital Twin Insider. 


Daily Tech Digest - January 23, 2025


Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham


Cyber Insights 2025: APIs – The Threat Continues

APIs are easily written, often with low-code / no-code tools. They are often considered by the developer as unimportant in comparison to the apps they connect, and probably protected by the tools that protect the apps. Bad call. “API attacks will increase in 2025 due to this over-reliance on existing application security and API management tools, but also due to organizations dragging their heels when it comes to protecting APIs,” says James Sherlow, systems engineering director of EMEA at Cequence Security. “While there was plenty of motivation to roll out APIs to stand up new services and support revenue streams, the same incentives are not there when it comes to protecting them.” Meanwhile, attackers are becoming increasingly sophisticated in their attacks. “In contrast, threat actors are not resting on their laurels,” he continued. “It’s now not uncommon for them to use multi-faceted attacks that seek to evade detection and then dodge and feint when the attack is blocked, all the time waiting until the last minute to target their end goal.” In short, he says, “It’s not until the business is breached that it wakes up to the fact that API protection and application protection are not one and the same thing. Web Application Firewalls, Content Delivery Networks, and API Gateways do not adequately protect APIs.”


Box-Checking or Behavior-Changing? Training That Matters

The pressure to meet these requirements is intense, and when a company finds an “acceptable” solution, they too often just check the box knowing they are compliant and stick with that solution in perpetuity - whether it creates a more secure workplace and behavioral change or not. Training programs designed purely to meet regulations are rarely effective. These initiatives tend to rely on generic content that employees skim through and forget. Organizations may meet the legal standard, but they fail to address the root causes of risky behavior. ... To improve outcomes, training programs must connect with people on a more practical level. Tailoring the content to fit specific roles within the organization is one way to do this. The threats a finance team faces, for example, are different from those encountered by IT professionals, so their training should reflect those differences. When employees see the relevance of the material, they are more likely to engage with it. Professionals in security awareness roles can distinguish themselves by designing programs that meet these needs. Equally important is embracing the concept of continuous learning. Annual training sessions often fail to stick. Smaller, ongoing lessons delivered throughout the year help employees retain information and incorporate it into their daily routines. 


OpenAI opposes data deletion demand in India citing US legal constraints

OpenAI has informed the Delhi High Court that any directive requiring it to delete training data used for ChatGPT would conflict with its legal obligations under US law. The statement came in response to a copyright lawsuit filed by the Reuters-backed Indian news agency ANI, marking a pivotal development in one of the first major AI-related legal battles in India. ... This case mirrors global legal trends, as OpenAI faces similar lawsuits in the United States and beyond, including from major organizations like The New York Times. OpenAI maintains its position that it adheres to the “fair use” doctrine, leveraging publicly available data to train its AI systems without infringing intellectual property laws. In the case of Raw Story Media v. OpenAI, heard in the Southern District of New York, the plaintiffs accused OpenAI of violating the Digital Millennium Copyright Act (DMCA) by stripping copyright management information (CMI) from their articles before using them to train ChatGPT. ... In the ANI v OpenAI case, the Delhi High Court has framed four key issues for adjudication, including whether using copyrighted material for training AI models constitutes infringement and whether Indian courts have jurisdiction over a US-based company. Nath’s view aligns with broader concerns over how existing legal frameworks struggle to keep pace with AI advancements.


Defense strategies to counter escalating hybrid attacks

Threat actor profiling plays a pivotal role in uncovering hybrid operations by going beyond surface-level indicators and examining deeper contextual elements. Profiling involves a thorough analysis of the actor’s history, their strategic objectives, and their operational behaviors across campaigns. For example, understanding the geopolitical implications of a ransomware attack targeting a defense contractor can reveal espionage motives cloaked in financial crime. Profiling allows researchers to differentiate between purely financial motivations and state-sponsored objectives masked as criminal operations. Hybrid actors often leave “behavioral fingerprints” – unique combinations of techniques and infrastructure reuse – that, when analyzed within the context of their history, can expose their true intentions. ... Threat intelligence feeds enriched with historical data can help correlate real-time events with known threat actor profiles. Additionally, implementing deception techniques, such as industry-specific honeypots, can reveal operational objectives and distinguish between actors based on their response to decoys. ... Organizations must adapt by adopting a defense-in-depth strategy that combines proactive threat hunting, continuous monitoring, and incident response preparedness.


4 Cybersecurity Misconceptions to Leave Behind in 2025

Workers need to avoid falling into a false sense of security, and organizations must ensure that they are frequently updating advice and strategies to reduce the likelihood of their employees falling victim. In addition, we found that this confidence doesn’t necessarily translate into action. A notable portion of those surveyed (29%) admit that they don’t report suspicious messages even when they do identify a phishing scam, despite the presence of convenient reporting tools like “report phishing” buttons. ... Our second misconception stems from workers’ sense of helplessness. This kind of cyber apathy can become a dangerous self-fulfilling prophecy if left unaddressed. The key problem is that even if it’s true that information is already online, this isn’t equivalent to being directly under threat, and there are different levels of risk. It’s one thing knowing someone has your home address; knowing they have your front door key in their pocket is quite another. Even if it’s hard to keep all of your data hidden, that doesn’t mean it’s not worth taking steps to keep key information protected. While it can seem impossible to stay safe when so much personal data is publicly available, this should be the impetus to bolster cybersecurity practices, such as not including personal information in passwords.


Real datacenter emissions are a dirty secret

With legislation such as the EU's Corporate Sustainability Reporting Directive (CSRD) now in force, customers and resellers alike are expecting more detailed carbon emissions reporting across all three Scopes from suppliers and vendors, according to Canalys. This expectation of transparency is increasingly important in vendor selection processes because customers need their vendors to share specific numbers to quantify the environmental impact of their cloud usage. "AWS has continued to fall behind its competitors here by not providing Scope 3 emissions data via its Customer Carbon Footprint Tool, which is still unavailable," Caddy claimed. "This issue has frustrated sustainability-focused customers and partners alike for years now, but as companies prepare for CSRD disclosure, this lack of granular emissions disclosure from AWS can create compliance challenges for EU-based AWS customers." We asked Amazon why it doesn't break out the emissions data for AWS separately from its other operations, but while the company confirmed this is so, it declined to offer an explanation. Neither did Microsoft nor Google. In a statement, an AWS spokesperson told us: "We continue to publish a detailed, transparent report of our year-on-year progress decarbonizing our operations, including across our datacenters, in our Sustainability Report. 


5 hot network trends for 2025

AI will generate new levels of network traffic, new requirements for low latency, and new layers of complexity. The saving grace, for network operators, is AIOps – the use of AI to optimize and automate network processes. “The integration of artificial intelligence (AI) into IT operations (ITOps) is becoming indispensable,” says Forrester analyst Carlos Casanova. “AIOps provides real-time contextualization and insights across the IT estate, ensuring that network infrastructure operates at peak efficiency in serving business needs.” ... AIOps can deliver proactive issue resolution, it plays a crucial role in embedding zero trust into networks by detecting and mitigating threats in real time, and it can help network execs reach the Holy Grail of “self-managing, self-healing networks that could adapt to changing conditions and demands with minimal human intervention.” ... Industry veteran Zeus Kerravala predicts that 2025 will be the year that Ethernet becomes the protocol of choice for AI-based networking. “There is currently a holy war regarding InfiniBand versus Ethernet for networking for AI with InfiniBand having taken the early lead,” Kerravala says. Ethernet has seen tremendous advancements over the last few years, and its performance is now on par with InfiniBand, he says, citing a recent test conducted by World Wide Technology. 


Building the Backbone of AI: Why Infrastructure Matters in the Race for Adoption

One of the primary challenges facing businesses when it comes to AI is having the foundational infrastructure to make it work. Depending on the use case, AI can be an incredibly demanding technology. Some algorithmic AI workloads use real-time inference, which will grossly underperform without a direct, high bandwidth, low-latency connection. ... An organization’s path to the cloud is really the central pillar of any successful AI strategy. The sheer scale at which organizations are harvesting and using data means that storing every piece of information on-premises is simply no longer viable. Instead, cloud-based data lakes and warehouses are now commonly used to store data, and having streamlined access to this data is essential. But this shift isn’t just about scale or storage – it’s about capability. AI models, particularly those requiring intensive training, often reside in the cloud, where hyperscalers can offer the power density and GPU capabilities that on-premises data centers typically cannot support. Choosing the right cloud provider in this context is of course vital, but the real game-changer lies not in the who of connectivity, but the how. Relying on the public internet for cloud access creates bottlenecks and risks, with unpredictable routes, variable latency, and compromised security.


Why all developers should adopt a safety-critical mindset

Safety-critical industries don’t just rely on reactive measures; they also invest heavily in proactive defenses. Defensive programming is a key practice here, emphasizing robust input validation, error handling, and preparation for edge cases. This same mindset can be invaluable in non-critical software development. A simple input error could crash a service if not properly handled—building systems with this in mind ensures you’re always anticipating the unexpected. Rigorous testing should also be a norm, and not just unit tests. While unit testing is valuable, it's important to go beyond that, testing real-world edge cases and boundary conditions. Consider fault injection testing, where specific failures are introduced (e.g., dropped packets, corrupted data, or unavailable resources) to observe how the system reacts. These methods complement stress testing under maximum load and simulations of network outages, offering a clearer picture of system resilience. Validating how your software handles external failures will build more confidence in your code. Graceful degradation is another principle worth adopting. If a system does fail, it should fail in a way that’s safe and understandable. For example, an online payment system might temporarily disable credit card processing but allow users to save items in their cart or check account details.
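
The payment example translates directly into code: catch the specific failure, disable the affected capability, and keep the rest of the service alive. A minimal Python sketch in which charge_card and save_cart are hypothetical placeholders:

```python
class PaymentGatewayError(Exception):
    """Raised when the external card processor is unreachable or failing."""

def charge_card(order):
    raise PaymentGatewayError("processor timeout")  # simulate an external failure

def save_cart(order):
    print(f"cart saved for later checkout: {order['items']}")

def checkout(order):
    try:
        charge_card(order)
        print("payment accepted")
    except PaymentGatewayError:
        # Degrade gracefully: card processing is down, but the user's work is
        # preserved and the failure mode is explicit rather than a crash.
        save_cart(order)
        print("card payments are temporarily unavailable; your cart has been saved")

checkout({"items": ["ssd", "cable"]})
```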


Strengthening Software Supply Chains with Dependency Management

Organizations must prioritize proactive dependency management, high-quality component selection and vigilance against vulnerabilities to mitigate escalating risks. A Software Bill of Materials (SBOM) is an essential tool in this approach, as it offers a comprehensive inventory of all software components, enabling organizations to quickly identify and address vulnerabilities across their dependencies. In fact, projects that implement an SBOM to manage open source software dependencies demonstrate a 264-day reduction in the time taken to fix vulnerabilities compared to those that do not. SBOMs provide a comprehensive list of every component within the software, enabling quicker response times to threats and bolstering overall security. However, despite the rise in SBOM usage, it is not keeping pace with the influx of new components being created, highlighting the need for enhanced automation, tooling and support for open source maintainers. ... This complacency — characterized by a false sense of security — accumulates risks that threaten the integrity of software supply chains. The rise of open source malware further complicates the landscape, as attackers exploit poor dependency management. 
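
For orientation, an SBOM is at heart a structured component inventory. The Python sketch below hand-rolls a minimal CycloneDX-style document; in practice SBOMs are emitted by build tooling (generators such as syft exist for this) and carry far more metadata, so treat this purely as an illustration of the shape:

```python
import json

# Minimal CycloneDX-style SBOM: an inventory of components with versions.
# Real SBOMs also include hashes, licenses, and dependency relationships.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"type": "library", "name": "openssl", "version": "1.0.1f",
         "purl": "pkg:generic/openssl@1.0.1f"},
    ],
}

with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)

# An inventory like this is what lets you answer "are we shipping log4j 2.14?" in minutes.
print(f"{len(sbom['components'])} components recorded")
```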

Daily Tech Digest - January 22, 2025

How Operating Models Need to Evolve in 2025

“In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as ‘AIOps.’ However, the use of AI will become so common in IT operations that we won’t need to call it [that] explicitly,” says Ruh in an email interview. “Instead, the term, ‘AIOps’ will become obsolete over the next two years as enterprises move towards the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy.” ... “The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation,” says Langley in an email interview. “These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals.” ... “IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment,” says Langley.


Cybersecurity is tough: 4 steps leaders can take now to reduce team burnout

Whether it’s about solidifying partnerships with business managers, changing corporate culture, or correcting errant employees, peer input is golden. No matter the scenario, it’s likely that other security leaders have dealt with the same or similar situations, so their input, empathy, and advice are invaluable. ... Well-informed leaders are more likely to champion and include security in new initiatives, an important shift in culture from seeing security as a pain to embracing security as an important business tool. Such a shift greatly reduces another top stressor among CISOs: lack of management support. In a security-centric organization, team members in all roles experience less pressure to perform miracles with no resources. And, instead of fighting with leaders for resources, the CISO has more time to focus on getting to know and better manage staff. ... Recognition, she says, boosts individual and team morale and motivation. “I am grateful for and do not take for granted having excellent leadership above me that supports me and my team. I try to make it easy for them.” And, since personal stressors also impact burnout, she encourages team members to share their personal stressors at her one-on-ones or in the group meeting where they can be supported.


Mandatory MFA, Biometrics Make Headway in Middle East, Africa

Digital identity platforms, such as UAE Pass in the United Arab Emirates and Nafath in Saudi Arabia, integrate with existing fingerprint and facial-recognition systems and can reduce the reliance on passwords, says Chris Murphy, a managing director with the cybersecurity practice at FTI Consulting in Dubai. "With mobile devices serving as the primary gateway to digital services, smartphone-based biometric authentication is the most widely used method in public and private sectors," he says. "Some countries, such as the UAE and Saudi Arabia, are early adopters of passwordless authentication, leveraging AI-based facial recognition and behavioral analytics for seamless and secure identity verification." African nations have also rolled out national identity cards based on biometrics. In South Africa, for example, customers can walk into a bank and open an account by using their fingerprint and linking it to the national ID database, which acts as the root of trust, says BIO-Key's Sullivan. "After they verify that that person is who they say they are with the Home Affairs Ministry, they can store that fingerprint [in the system]," he says. "From then on, anytime they want to authenticate that user, they just touch a finger. They've just now started rolling out the ability to do that without even presenting your card for subsequent business."


Acronis CISO on why backup strategies fail and how to make them resilient

Start by conducting a thorough business impact analysis. Figure out which processes, applications, and data sets are mission-critical, and decide how much downtime or data loss is acceptable. The more vital the data or application, the tighter (and more expensive) your recovery time objective (RTO) and recovery point objective (RPO) targets will be. A strong data and systems classification scheme will make this process significantly easier. There is always a trade-off: the more stringent your RTO and RPO, the higher the cost and complexity of the backup infrastructure needed to meet them. That is why prioritisation is key. For example, a real-time e-commerce database might need near-zero downtime, while archived records can tolerate days of recovery time. Once you establish your priorities, you can use technologies like incremental backups, continuous data protection, and cross-site replication to meet tighter RTO and RPO targets without overwhelming your network or your budget. ... Start by reviewing any regulatory or compliance rules you must follow; these often dictate which data must be kept and for how long. Keep in mind that some information must not be kept longer than absolutely necessary; personally identifiable information is the obvious example. Next, look at the operational value of your data.
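The tiering logic can be made explicit in a few lines. The thresholds and technique names below are assumptions chosen to mirror the examples above (near-zero downtime for a live database, days of tolerance for archives), not prescriptions.

```python
# Illustrative only: map RTO/RPO targets to a backup technique.
def backup_strategy(rto_hours: float, rpo_hours: float) -> str:
    if rto_hours < 1 and rpo_hours < 1:
        # Mission-critical, e.g. a real-time e-commerce database.
        return "continuous data protection + cross-site replication"
    if rto_hours <= 24:
        # Important but not real-time workloads.
        return "incremental backups with off-site copies"
    # Archived records that can tolerate days of recovery time.
    return "periodic full backups to cold storage"

print(backup_strategy(0.5, 0.25))   # continuous data protection + cross-site replication
print(backup_strategy(72.0, 24.0))  # periodic full backups to cold storage
```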


The bitter lesson for generative AI adoption

The rapid pace of innovation and the proliferation of new models have raised concerns about technology lock-in. Lock-in occurs when businesses become overly reliant on a specific model with bespoke scaffolding that limits their ability to adapt to innovations. Upon its release, GPT-4 cost the same as GPT-3 despite being a superior model with much higher performance. Since the GPT-4 release in March 2023, OpenAI’s prices have fallen a further sixfold for input data and fourfold for output data with GPT-4o, released May 13, 2024. Of course, an analysis of this sort assumes that generation is sold at cost or at a fixed margin, which is probably not true; significant capital injections and negative margins aimed at capturing market share have likely subsidized some of this. However, we doubt these levers explain all the improvement gains and price reductions. Even Gemini 1.5 Flash, released May 24, 2024, offers performance near GPT-4 at roughly 1/85th the input cost and 1/57th the output cost of the original GPT-4. Although eliminating technology lock-in may not be possible, businesses can loosen its grip by using commercial models in the short run.
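To see what such ratios mean for a workload, here is a back-of-the-envelope comparison. The base prices are hypothetical placeholders rather than published rates; only the roughly 85x and 57x ratios come from the discussion above.

```python
# Hypothetical $/million-token prices for an "original GPT-4"-class model.
BASE_IN, BASE_OUT = 30.0, 60.0

PRICES = {
    "older_flagship": (BASE_IN, BASE_OUT),
    "newer_model":    (BASE_IN / 85, BASE_OUT / 57),  # ~85x / ~57x cheaper
}

def workload_cost(model: str, input_m_tokens: float, output_m_tokens: float) -> float:
    p_in, p_out = PRICES[model]
    return input_m_tokens * p_in + output_m_tokens * p_out

# A workload of 100M input and 20M output tokens per month:
for model in PRICES:
    print(model, round(workload_cost(model, 100, 20), 2))
```

The point is not the absolute numbers but the relative gap: in this hypothetical, swapping the model behind a product shrinks the bill by nearly two orders of magnitude without re-architecting.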


Staying Ahead: Key Cloud-Native Security Practices

NHIs are the machine identities used in cybersecurity. Each is created by combining a “Secret” (an encrypted password, token, or key) and the permissions allocated to that Secret by a receiving server. In an increasingly digital landscape, the role of these machine identities and their secrets cannot be overstated, which makes the management of NHIs a top priority for organizations, particularly those in industries like financial services, healthcare, and travel. ... As technology has advanced, so too has the need for more thorough and advanced cybersecurity practices. One rapidly evolving area is the management of Non-Human Identities (NHIs), which is inseparable from the management of the secret data woven into them. Understanding and efficiently managing NHIs and their secrets is not just a choice but an imperative for organizations operating in the digital space and leaning on cloud-native applications. NHIs signal potential security weaknesses through their unique identifiers, which function not unlike a travel passport. By monitoring, managing, and securely storing these identifiers and the permissions granted to them, organizations can bridge the troublesome chasm between security and R&D teams, making for better-protected organizations.
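A toy model helps make the “Secret plus permissions” definition tangible. Everything here (the class, field names, and 90-day rotation window) is an illustrative assumption, not a description of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    identifier: str                 # unique ID, not unlike a travel passport
    secret_hash: str                # store a hash, never the raw token or key
    permissions: set = field(default_factory=set)
    last_rotated: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def needs_rotation(self, max_age_days: int = 90) -> bool:
        # Stale secrets are a common weakness; flag them for rotation.
        age = datetime.now(timezone.utc) - self.last_rotated
        return age > timedelta(days=max_age_days)

svc = NonHumanIdentity("ci-deployer-01", "sha256:9f2b", {"deploy:prod"})
print(svc.needs_rotation())  # False for a freshly created identity
```

Tracking even this much (who the identity is, what it may do, and when its secret last changed) gives security and R&D teams a shared inventory to reason over.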


3 promises every CIO should keep in 2025

To minimize disappointment, technologists need to set the expectations of business leaders. At the same time, they need to evangelize the value of new technology. “The CIO has to be an evangelist, educator, and realist all at the same time,” says Fernandes. “IT leaders should be under-hypers rather than over-hypers, and promote technology only in the context of business cases.” ... According to Leon Roberge, CIO for Toshiba America Business Solutions and Toshiba Global Commerce Solutions, technology leaders should become more visible to the business and lead their teams by example. “I started attending the business meetings of all the other C-level executives on a monthly basis to make sure I’m getting the voice of the business,” he says. “Where are we heading? How are we making money? How can I help business leaders overcome their challenges and meet their objectives?” ... CIOs should also build platforms for custom tools that meet the specific needs not only of their industry and geography, but of their company, and even of specific divisions. AI models will be developed differently for different industries; different data will be used to train models for healthcare than for logistics, for example. Each company has its own way of doing business and its own data sets.


5G in Business: Roadblocks, Catalysts in Adoption - Part 1

Enterprises considering 5G adoption are confronted with several challenges, key among them high capex, security, interoperability and integration with existing infrastructure, and skills development within their workforce. Achieving consistent coverage and navigating the complex regulatory landscape are also inhibitors to adoption. Jenn Mullen, emerging technology solutions lead at Keysight Technologies, told ISMG that business leaders must address potential security concerns, ensure seamless integration with existing IT infrastructure and demonstrate a strong return on investment. ... Early enterprise 5G projects were unsuccessful because the applications and devices weren’t 5G-compatible. For instance, in 2021, ArcelorMittal France conceived 5G Steel, a private cellular network serving its steelworks in Dunkirk, Mardyck and Florange in France, to support its digitalization plans with high-speed, site-wide 5G connectivity. The private network, which covers a 10 square kilometer area, was built by French public network operator Orange. When it turned the network on in October 2022, the connecting devices were only 4G, leading to underutilization. “The availability of 5G-compatible terminals suitable for use in an industrial environment is too limited,” said David Glijer, the company’s director of digital transformation at the time.


Rethinking Business Models With AI

We are entering a new era in which business models and organizations are transformed by the power of generative AI. An AI-powered business model is an organizational framework that fundamentally integrates AI into one or more core aspects of how a company creates, delivers and captures value. Unlike traditional business models that merely use AI as a tool for optimization, a truly AI-powered business model exhibits distinctive characteristics, such as self-reinforcing intelligence, scalable personalization and ecosystem integration. ... As an organization moves through its AI-powered business model innovation journey, it must systematically consider the eight essentials of AI-driven business models (Figure 3), including a holistic assessment of current-state capabilities, identification of AI innovation opportunities and development of a well-defined map of the transformation journey. Following this, rapid innovation sprints should be conducted to translate strategic visions into tangible results that validate the identified AI opportunities and de-risk at-scale deployments. ... While the potential rewards are compelling, from operational efficiencies to entirely new value propositions, the journey is complex and fraught with pitfalls, not least from existing barriers.


Increase in cyberattacks setting the stage for identity security’s rapid growth

Digital identity security is rapidly growing in importance as identity infrastructure becomes a target for cyber attackers. Misconfigurations of identity systems have become a significant concern, yet many companies still seem unaware of the issue. Security expert Hed Kovetz says that “identity is always the go-to of every attacker.” As CEO and co-founder of digital identity protection firm Silverfort, he believes that protecting identity is one of the most complicated tasks security teams face. “If you ask any security team, I think identity is probably the one that is the most complex,” says Kovetz. “It’s painful: There are so many tools, so many legacy technologies and legacy infrastructure still in place.” ... To secure identity infrastructures, security specialists need to deal with both very old and very new technologies consistently. Kovetz says he first focused on legacy systems that could not be properly secured and could be used by attackers to spread inside the network, and later extended that work to protecting modern technologies as well. “I think that protecting these things end to end is the key,” says Kovetz. “Otherwise, attackers will always go to the weaker part.” ... Although the increase in cyberattacks is setting the stage for identity security’s rapid growth in importance, some organizations are still struggling to acknowledge weaknesses in their identity infrastructure.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley