
Daily Tech Digest - June 25, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein



Why data observability is the missing layer of modern networking

You might hear people use these terms interchangeably, but they’re not the same thing. Visibility is about what you can see – dashboard statistics, logs, uptime numbers, bandwidth figures, the raw data that tells you what’s happening across your network. Observability, on the other hand, is about what that data actually means. It’s the ability to interpret, analyse, and act on those insights. It’s not just about seeing a traffic spike, but understanding why it happened. It’s not just spotting a latency issue, but knowing which apps are affected and where the bottleneck sits. ... Today, connectivity needs to be smart, agile, and scalable. It’s about building infrastructure that supports cloud, remote work, and everything in between. Whether you’re adding a new site, onboarding a remote team, or launching a cloud-hosted app, your network should be able to scale and respond at speed. Then there’s security, a non-negotiable layer that protects your entire ecosystem. Great security isn’t about throwing up walls; it’s about creating confidence. That means deploying zero trust principles, segmenting access, detecting threats in real time, and encrypting data, without making users’ lives harder. ... Finally, we come to observability. Arguably the most unappreciated of the three, but quickly becoming essential.
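The gap between the two layers can be made concrete in a few lines. The sketch below (function name and threshold are hypothetical, for illustration only) turns raw per-app latency samples, the visibility layer, into an answer to "which apps are affected", the observability layer:

```python
from statistics import mean, stdev

def latency_anomalies(samples_by_app, threshold=3.0):
    """Turn raw latency samples (visibility) into affected apps (observability).

    samples_by_app: {app_name: [latency_ms, ...]}, last sample is the newest.
    Returns {app_name: z_score} for apps whose newest sample deviates
    sharply from that app's own baseline.
    """
    affected = {}
    for app, samples in samples_by_app.items():
        baseline, latest = samples[:-1], samples[-1]
        if len(baseline) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (latest - mu) / sigma > threshold:
            affected[app] = round((latest - mu) / sigma, 1)
    return affected
```

The same dashboard numbers feed both layers; the difference is that this answers "which apps are affected", not merely "a spike happened".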


6 Key Security Risks in LLMs: A Platform Engineer’s Guide

Prompt injection is the AI-era equivalent of SQL injection. Attackers craft malicious inputs to manipulate an LLM, bypass safeguards or extract sensitive data. These attacks range from simple jailbreak prompts that override safety rules to more advanced exploits that influence backend systems. ... Model extraction attacks allow adversaries to systematically query an LLM to reconstruct its knowledge base or training data, essentially cloning its capabilities. These attacks often rely on automated scripts submitting millions of queries to map the model’s responses. One common technique, model inversion, involves strategically structured inputs that extract sensitive or proprietary information embedded in the model. Attackers may also use repeated, incremental queries with slight variations to amass a dataset that mimics the original training data. ... On the output side, an LLM might inadvertently reveal private information embedded in its dataset or previously entered user data. A common risk scenario involves users unknowingly submitting financial records or passwords into an AI-powered chatbot, which could then store, retrieve or expose this data unpredictably. With cloud-based LLMs, the risk extends further. Data from one organization could surface in another’s responses.
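As a minimal illustration of why prompt injection works, consider the concatenation pattern below. The deny-list guard is deliberately naive and its phrases are hypothetical; a string filter alone is easily bypassed, and real defenses need model-side controls:

```python
SYSTEM_PROMPT = "You are a support bot. Never reveal internal data."

# Vulnerable pattern: untrusted user text is spliced directly into the
# model input, so instructions inside it compete with the system prompt.
def build_input_unsafe(user_text):
    return SYSTEM_PROMPT + "\n" + user_text

# Naive mitigation: delimit untrusted text and screen for known jailbreak
# phrasing before it reaches the model.
DENY_PHRASES = ("ignore previous instructions", "reveal your system prompt")

def build_input_guarded(user_text):
    lowered = user_text.lower()
    if any(phrase in lowered for phrase in DENY_PHRASES):
        raise ValueError("possible prompt injection")
    return SYSTEM_PROMPT + "\n<untrusted>\n" + user_text + "\n</untrusted>"
```

The structural problem is the same one SQL injection had: data and instructions share a channel, which is why delimiting untrusted input only raises the bar rather than removing the risk.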


Adopting Agentic AI: Ethical Governance, Business Impact, Talent Demand, and Data Security

Agentic AI introduces a spectrum of ethical challenges that demand proactive governance. Given its capacity for independent decision-making, there is a heightened need for transparent, accountable, and ethically driven AI models. Ethical governance in Agentic AI revolves around establishing robust policies that govern decision logic, bias mitigation, and accountability. Organizations leveraging Agentic AI must prioritize fairness, inclusivity, and regulatory compliance to avoid unintended consequences. ... The integration of Agentic AI into business ecosystems promises not just automation but strategic enhancement of decision-making. These AI agents are designed to process real-time data, predict market shifts, and autonomously execute decisions that would traditionally require human intervention. In sectors such as finance, healthcare, and manufacturing, Agentic AI is optimizing supply chains, enhancing predictive analytics, and streamlining operations with unparalleled accuracy. ... One of the major concerns surrounding Agentic AI is data security. Autonomous decision-making systems require vast amounts of real-time data to function effectively, raising questions about data privacy, ownership, and cybersecurity. Cyber threats aimed at exploiting autonomous decision-making could have severe consequences, especially in sectors like finance and healthcare.


Unveiling Supply Chain Transformation: IIoT and Digital Twins

Digital twins (DTs) and the Industrial Internet of Things (IIoT) are evolving technologies that are transforming the digital landscape of supply chain transformation. The IIoT connects physical sensors and actuators, while DTs are virtual replicas of the physical components. DTs are invaluable for testing and simulating design parameters without disrupting production elements. ... Contrary to generic IoT, which is more consumer-oriented, the IIoT enables communication and interconnection between machines, industrial devices, and sensors within a supply chain management ecosystem, with the aim of business optimization and efficiency. Embedding IIoT in supply chain management systems enables real-time monitoring and analysis of industrial environments, including manufacturing and logistics. It boosts efforts to increase productivity, cut downtime, and facilitate informed and accurate decision-making. ... A supply chain equipped with IIoT will be a main ingredient in boosting real-time monitoring and enabling informed decision-making. Every stage of the supply chain ecosystem will feel the impact of IIoT: automated inventory management, health monitoring and tracking of goods, analytics, and real-time response to meet current marketplace demands.


The state of cloud security

An important complicating factor in all this is that customers don’t always know what’s happening in cloud data centers. At the same time, De Jong acknowledges that on-premises environments have the same problem. “There’s a spectrum of issues, and a lot of overlap,” he says, something Wesley Swartelé agrees with: “You have to align many things between on-prem and cloud.” Andre Honders points to a specific aspect of the cloud: “You can be in a shared environment with ten other customers. This means you have to deal with different visions and techniques that do not exist on-premises.” This is certainly the case. There are plenty of worst-case scenarios to consider in the public cloud. ... However, a major bottleneck remains the lack of qualified personnel. We hear this all the time when it comes to security. And in other IT fields too, as it happens, meaning one could draw a society-wide conclusion. Nevertheless, staff shortages are perhaps more acute in this sector. Erik de Jong sees society as a whole having similar problems, at any rate. “This is not an IT problem. Just ask painters. In every company, a small proportion of the workforce does most of the work.” Wesley Swartelé agrees it is a challenge for organizations in this industry to find the right people. “Finding a good IT professional with the right mindset is difficult.”


As AI reshapes the enterprise, security architecture can’t afford to lag behind

Technology works both ways – it enables the attacker and the smart defender. Cybercriminals are already capitalising on its potential, using open source AI models like DeepSeek and Grok to automate reconnaissance, craft sophisticated phishing campaigns, and produce deepfakes that can convincingly impersonate executives or business partners. What makes this especially dangerous is that these tools don’t just improve the quality of attacks; they multiply their volume. That’s why enterprises need to go beyond reactive defenses and start embedding AI-aware policies into their core security fabric. It starts with applying Zero Trust to AI interactions, limiting access based on user roles, input/output restrictions, and verified behaviour. ... As attackers deploy AI to craft polymorphic malware and mimic legitimate user behaviour, traditional defenses struggle to keep up. AI is now a critical part of the enterprise security toolkit, helping CISOs and security teams move from reactive to proactive threat defense. It enables rapid anomaly detection, surfaces hidden risks earlier in the kill chain, and supports real-time incident response by isolating threats before they can spread. But AI alone isn’t enough. Security leaders must strengthen data privacy and security by implementing full-spectrum DLP, encryption, and input monitoring to protect sensitive data from exposure, especially as AI interacts with live systems. 


Identity Is the New Perimeter: Why Proofing and Verification Are Business Imperatives

Digital innovation, growing cyber threats, regulatory pressure, and rising consumer expectations all drive the need for strong identity proofing and verification. Here is why it is more important than ever:

Combatting Fraud and Identity Theft: Criminals use stolen identities to open accounts, secure loans, or gain unauthorized access. Identity proofing is the first defense against impersonation and financial loss.

Enabling Secure Digital Access: As more services – from banking to healthcare – go digital, strong remote verification ensures secure access and builds trust in online transactions.

Regulatory Compliance: Laws such as KYC, AML, GDPR, HIPAA, and CIPA require identity verification to protect consumers and prevent misuse. Compliance is especially critical in finance, healthcare, and government sectors.

Preventing Account Takeover (ATO): Even legitimate accounts are at risk. Continuous verification at key moments (e.g., password resets, high-risk actions) helps prevent unauthorized access via stolen credentials or SIM swapping.

Enabling Zero Trust Security: Zero Trust assumes no inherent trust in users or devices. Continuous identity verification is central to enforcing this model, especially in remote or hybrid work environments.
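The "continuous verification at key moments" idea reduces to a risk decision at each sensitive action. The sketch below is a hypothetical policy; the action names, weights, and threshold are illustrative, not from any standard:

```python
HIGH_RISK_ACTIONS = {"password_reset", "payout", "add_device"}

def requires_step_up(action, new_device, new_location):
    """Decide whether to demand fresh identity verification.

    Rather than trusting the existing session, high-risk actions or
    unusual context (unrecognized device or location) trigger re-proofing,
    which is what blunts stolen-credential and SIM-swap attacks.
    """
    risk = 0
    if action in HIGH_RISK_ACTIONS:
        risk += 2
    if new_device:
        risk += 1
    if new_location:
        risk += 1
    return risk >= 2
```

A real deployment would feed many more signals into the score, but the shape is the same: verification is an ongoing decision, not a one-time gate at login.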


Why should companies or organizations convert to FIDO security keys?

FIDO security keys significantly reduce the risk of phishing, credential theft, and brute-force attacks. Because they don’t rely on shared secrets like passwords, they can’t be reused or intercepted. Their phishing-resistant protocol ensures authentication is only completed with the correct web origin. FIDO security keys also address insider threats and endpoint vulnerabilities by requiring physical presence, further enhancing protection, especially in high-security environments such as healthcare or public administration. ... In principle, any organization that prioritizes a secure IT infrastructure stands to benefit from adopting FIDO-based multi-factor authentication. Whether it’s a small business protecting customer data or a global enterprise managing complex access structures, FIDO security keys provide a robust, phishing-resistant alternative to passwords. That said, sectors with heightened regulatory requirements, such as healthcare, finance, public administration, and critical infrastructure, have particularly strong incentives to adopt strong authentication. In these fields, the risk of breaches is not only costly but can also have legal and operational consequences. FIDO security keys are also ideal for restricted environments, such as manufacturing floors or emergency rooms, where smartphones may not be permitted. 
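The origin binding behind that phishing resistance can be sketched in a few lines. This is a simplified, non-normative check assuming WebAuthn-style client data; real verification also validates the challenge, the operation type, and the authenticator's signature:

```python
import json

def check_origin(client_data_json: bytes, expected_origin: str) -> bool:
    """Simplified WebAuthn-style origin check.

    The browser embeds the page origin in the signed clientDataJSON, so an
    authentication phished on evil.example cannot satisfy a relying party
    expecting https://bank.example. This is the property that makes the
    protocol phishing-resistant: the user cannot be tricked into
    authenticating to the wrong site, because the origin travels with
    the signed data.
    """
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == expected_origin
```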


Data Warehouse vs. Data Lakehouse

Data warehouses and data lakehouses have emerged as two prominent rivals in the data storage and analytics markets, each with advantages and disadvantages. The primary difference between the two platforms is that while the data warehouse handles only structured and semi-structured data, the data lakehouse can store structured, semi-structured, and unstructured data at virtually any scale. ... Traditional data warehouses have long supported all types of business professionals in their data storage and analytics endeavors. This approach involves ingesting structured data into a centralized repository, with a focus on warehouse integration and business intelligence reporting. Enter the data lakehouse approach, which is vastly superior for deep-dive data analysis. The lakehouse has successfully blended characteristics of the data warehouse and the data lake to create a scalable, flexible solution. The key benefit of this approach is that it enables data scientists to quickly extract insights from raw data with advanced AI tools. ... Although a data warehouse supports BI use cases and provides a “single source of truth” for analytics and reporting purposes, it can become difficult to manage as new data sources emerge. The data lakehouse has redefined how global businesses store and process data.


AI or Data Governance? Gartner Says You Need Both

Data and analytics (D&A) leaders, such as chief data officers (CDOs) and chief data and analytics officers (CDAOs), play a significant role in driving their organizations' D&A successes, which are necessary to show business value from AI projects. Gartner predicts that by 2028, 80% of gen AI business apps will be developed on existing data management platforms. Its analysts say, "This is the best time to be in data and analytics," and CDAOs need to embrace the AI opportunity eyed by others in the C-suite, or they will be absorbed into other technical functions. With high D&A ambitions and AI pilots becoming increasingly ubiquitous, focus is shifting toward consistent execution and scaling. But D&A leaders are overwhelmed with routine data management tasks and need a new AI strategy. ... "We've never been good at governance, and now AI demands that we be even faster, which means you have to take more risks and be prepared to fail. We have to accept two things: Data will never be fully governed. Secondly, attempting to fully govern data before delivering AI is just not realistic. We need a more practical solution like trust models," Zaidi said. He said trust models provide a trust rating for data assets by examining their value, lineage and risk. They offer up-to-date information on data trustworthiness and are crucial for fostering confidence.
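A trust model of the kind Zaidi describes could be sketched as a weighted score over the three factors he names. The weights and the 0-to-1 scale below are assumptions for illustration, not Gartner's methodology:

```python
def trust_rating(value, lineage_coverage, risk,
                 w_value=0.4, w_lineage=0.4, w_risk=0.2):
    """Combine value, lineage coverage, and risk (each 0..1) into a trust score.

    Higher business value and better-documented lineage raise trust;
    higher risk lowers it. The output is a 0..1 rating a consumer can
    check before relying on a data asset, instead of waiting for the
    asset to be "fully governed".
    """
    for x in (value, lineage_coverage, risk):
        if not 0.0 <= x <= 1.0:
            raise ValueError("inputs must be in [0, 1]")
    return w_value * value + w_lineage * lineage_coverage + w_risk * (1.0 - risk)
```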

Daily Tech Digest - June 06, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley


The intersection of identity security and data privacy laws for a safe digital space

The integration of identity security with data privacy has become essential for corporations, governing bodies, and policymakers. Compliance requirements are set by frameworks such as the Digital Personal Data Protection (DPDP) Bill and the CERT-In directives, but encryption and access control alone are no longer enough. AI-driven identity security tools flag risky access combinations before they become gateways to fraud, monitor behavioral anomalies in real time, and offer deep, contextual visibility into both human and machine identities. Together, these capabilities produce trust-building, resilient security that eases compliance: proactive security that is self-adjusting and able to overcome the challenges encountered today. By aligning intelligent identity security tools with privacy regulations, organisations gain more than just protection; they earn credibility. ... The DPDP Act tracks closely to global benchmarks such as GDPR and data protection regulations in Singapore and Australia, which mandate that organisations implement appropriate security measures to protect personal data and strengthen their response to data breaches. They also assert that organisations that embrace and prioritise data privacy and identity security stand to gain reduced risk and enhanced trust from customers, partners and regulators.


Who needs real things when everything can be a hologram?

Meta founder and CEO Mark Zuckerberg said recently on Theo Von’s “This Past Weekend” podcast that everything is shifting to holograms. A hologram is a three-dimensional image that represents an object in a way that allows it to be viewed from different angles, creating the illusion of depth. Zuckerberg predicts that most of our physical objects will become obsolete and replaced by holographic versions seen through augmented reality (AR) glasses. The conversation floated the idea that books, board games, ping-pong tables, and even smartphones could all be virtualized, replacing the physical, real-world versions. Zuckerberg also expects that somewhere between one and two billion people could replace their smartphones with AR glasses within four years. One potential problem with that prediction: the public has to want to replace physical objects with holographic versions. So far, Apple’s experience with Apple Vision Pro does not imply that the public is clamoring for holographic replacements. ... I have no doubt that holograms will increasingly become ubiquitous in our lives. But I doubt that a majority will ever prefer a holographic virtual book over a physical book or even a physical e-book reader. The same goes for other objects in our lives. I also suspect both Zuckerberg’s motives and his predictive powers.


How AI Is Rewriting the CIO’s Workforce Strategy

With the mystique fading, enterprises are replacing large prompt-engineering teams with AI platform engineers, MLOps architects, and cross-trained analysts. A prompt engineer in 2023 often becomes a context architect by 2025; data scientists evolve into AI integrators; business-intelligence analysts transition into AI interaction designers; and DevOps engineers step up as MLOps platform leads. The cultural shift matters as much as the job titles. AI work is no longer about one-off magic, it is about building reliable infrastructure. CIOs generally face three choices. One is to spend on systems that make prompts reproducible and maintainable, such as RAG pipelines or proprietary context platforms. Another is to cut excessive spending on niche roles now being absorbed by automation. The third is to reskill internal talent, transforming today’s prompt writers into tomorrow’s systems thinkers who understand context flows, memory management, and AI security. A skilled prompt engineer today can become an exceptional context architect tomorrow, provided the organization invests in training. ... Prompt engineering isn’t dead, but its peak as a standalone role may already be behind us. The smartest organizations are shifting to systems that abstract prompt complexity and scale their AI capability without becoming dependent on a single human’s creativity.
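One concrete form of "making prompts reproducible and maintainable" is treating them as versioned artifacts rather than ad-hoc strings. The registry below is a hypothetical sketch of that idea; the task names and template text are invented:

```python
import string

# Versioned prompt registry: the same (task, version, fields) always
# yields the same model input, which is what makes prompts reproducible,
# reviewable, and safe to evolve without breaking downstream consumers.
PROMPT_TEMPLATES = {
    ("summarize", "v1"): "Summarize:\n$document",
    ("summarize", "v2"): "Summarize for a $audience audience:\n$document",
}

def render_prompt(task, version, **fields):
    """Render a versioned prompt template with the given fields."""
    template = string.Template(PROMPT_TEMPLATES[(task, version)])
    return template.substitute(fields)
```

This is the shift the passage describes: the craft moves from writing one clever prompt to maintaining the system that produces prompts, with versioning, review, and rollback like any other code.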


Biometric privacy on trial: The constitutional stakes in United States v. Brown

The divergence between the two federal circuit courts has created a classic “circuit split,” a situation that almost inevitably calls for resolution by the U.S. Supreme Court. Legal scholars point out that this split could not be more consequential, as it directly affects how courts across the country treat compelled access to devices that contain vast troves of personal, private, and potentially incriminating information. What’s at stake in the Brown decision goes far beyond criminal law. In the digital age, smartphones are extensions of the self, containing everything from personal messages and photos to financial records, location data, and even health information. Unlocking one’s device may reveal more than a house search could have in the 18th century, and is the very kind of search the Bill of Rights was designed to restrict. If the D.C. Circuit’s reasoning prevails, biometric security methods like Apple’s Face ID, Samsung’s iris scans, and various fingerprint unlock systems could receive constitutional protection when used to lock private data. That, in turn, could significantly limit law enforcement’s ability to compel access to devices without a warrant or consent. Moreover, such a ruling would align biometric authentication with established protections for passcodes.


GenAI controls and ZTNA architecture set SSE vendors apart

“[SSE] provides a range of security capabilities, including adaptive access based on identity and context, malware protection, data security, and threat prevention, as well as the associated analytics and visibility,” Gartner writes. “It enables more direct connectivity for hybrid users by reducing latency and providing the potential for improved user experience.” Must-haves include advanced data protection capabilities – such as unified data leak protection (DLP), content-aware encryption, and label-based controls – that enable enterprises to enforce consistent data security policies across web, cloud, and private applications. Securing Software-as-a-Service (SaaS) applications is another important area, according to Gartner. SaaS security posture management (SSPM) and deep API integrations provide real-time visibility into SaaS app usage, configurations, and user behaviors, which Gartner says can help security teams remediate risks before they become incidents. Gartner defines SSPM as a category of tools that continuously assess and manage the security posture of SaaS apps. ... Other necessary capabilities for a complete SSE solution include digital experience monitoring (DEM) and AI-driven automation and coaching, according to Gartner. 


5 Risk Management Lessons OT Cybersecurity Leaders Can’t Afford to Ignore

Weak or shared passwords, outdated software, and misconfigured networks are consistently leveraged by malicious actors. Seemingly minor oversights can create significant gaps in an organization’s defenses, allowing attackers to gain unauthorized access and cause havoc. When the basics break down, particularly in converged IT/OT environments where attackers only need one foothold, consequences escalate fast. ... One common misconception in critical infrastructure is that OT systems are safe unless directly targeted. However, the reality is far more nuanced. Many incidents impacting OT environments originate as seemingly innocuous IT intrusions. Attackers enter through an overlooked endpoint or compromised credential in the enterprise network and then move laterally into the OT environment through weak segmentation or misconfigured gateways. This pattern has repeatedly emerged in the pipeline sector. ... Time and again, post-mortems reveal the same pattern: organizations lacking tested procedures, clear roles, or real-world readiness. A proactive posture begins with rigorous risk assessments, threat modeling, and vulnerability scanning, not once, but as a cycle that evolves with the threat landscape. The resulting incident response plan should outline clear procedures for detecting, containing, and recovering from cyber incidents.


You Can Build Authentication In-House, But Should You?

Auth isn’t a static feature. It evolves, layer by layer, as your product grows, your user base diversifies, and enterprise customers introduce new requirements. Over time, the simple system you started with is forced to stretch well beyond its original architecture. Every engineering team that builds auth internally will encounter key inflection points: moments when the complexity, security risk, and maintenance burden begin to outweigh the benefits of control. ... Once you’re selling into larger businesses, SSO becomes a hard requirement. Customers want to integrate with their own identity providers like Okta, Microsoft Entra, or Google Workspace using protocols like SAML or OIDC. Implementing these protocols is non-trivial, especially when each customer has their own quirks and expectations around onboarding, metadata exchange, and user mapping. ... Once SSO is in place, the next enterprise requirement is often SCIM (System for Cross-domain Identity Management). SCIM, also known as Directory Sync, enables organizations to automatically provision and deprovision user accounts through their identity provider. Supporting it properly means syncing state between your system and theirs and handling partial failures gracefully. ... The newest wave of complexity in modern authentication comes from AI agents and LLM-powered applications.
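The sync-and-partial-failure point can be sketched as a reconciliation pass. This is not the SCIM wire protocol, just the state-syncing logic it implies, with hypothetical names:

```python
def reconcile_users(idp_users, local_users, deprovision):
    """One reconciliation pass between an identity provider and a local store.

    idp_users / local_users: dicts mapping user_id -> profile dict.
    deprovision: callable that removes one local user and may fail.
    Returns (to_create, to_update, failures). A failed deprovision of one
    user is recorded for retry and never blocks the rest of the sync,
    which is the "handling partial failures gracefully" requirement.
    """
    to_create = {uid: p for uid, p in idp_users.items() if uid not in local_users}
    to_update = {uid: p for uid, p in idp_users.items()
                 if uid in local_users and local_users[uid] != p}
    failures = []
    for uid in set(local_users) - set(idp_users):
        try:
            deprovision(uid)
        except Exception:
            failures.append(uid)  # record and continue; retry on the next pass
    return to_create, to_update, failures
```

Building this once is easy; the ongoing burden is that every enterprise customer's directory becomes a second source of truth your system must converge with continuously.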


Developer Joy: A Better Way to Boost Developer Productivity

Play isn’t just fluff; it’s a tool. Whether it’s trying something new in a codebase, hacking together a prototype, or taking a break to let the brain wander, joy helps developers learn faster, solve problems more creatively, and stay engaged. ... Aim to reduce friction and toil, the little frustrations that break momentum and make work feel like a slog. Long build and test times are common culprits. At Gradle, the team is particularly interested in improving the reliability of tests by giving developers the right tools to understand intermittent failures. ... When we’re stuck on a problem, we’ll often bang our head against the code until midnight, without getting anywhere. Then in the morning, suddenly it takes five minutes for the solution to click into place. A good night’s sleep is the best debugging tool, but why? What happens? This is the default mode network at work. The default mode network is a set of connections in your brain that activates when you’re truly idle. This network is responsible for many vital brain functions, including creativity and complex problem-solving. Instead of filling every spare moment with busywork, take proper breaks. Go for a walk. Knit. Garden. "Dead time" in these examples isn't slacking, it’s deep problem-solving in disguise.


Get out of the audit committee: Why CISOs need dedicated board time

The problem is the limited time allocated to CISOs in audit committee meetings is not sufficient for comprehensive cybersecurity discussions. Increasingly, more time is needed for conversations around managing the complex risk landscape. In previous CISO roles, Gerchow had a similar cadence, with quarterly time with the security committee and quarterly time with the board. He also had closed-door sessions with only board members. “Anyone who’s an employee of the company, even the CEO, has to drop off the call or leave the room, so it’s just you with the board or the director of the board,” he tells CSO. He found these particularly important for enabling frank conversations, which might centre on budget, roadblocks to new security implementations or whether he and his team are getting enough time to implement security programs. “They may ask: ‘How are things really going? Are you getting the support you need?’ It’s a transparent conversation without the other executives of the company being present.”


Mind the Gap: AI-Driven Data and Analytics Disruption

The Holy Grail of metadata collection is extracting meaning from program code: data structures and entities, data elements, functionality, and lineage. For me, this is one of the most potentially interesting and impactful applications of AI to information management. I’ve tried it, and it works. I loaded an old C program that had no comments but reasonably descriptive variable names into ChatGPT, and it figured out what the program was doing, the purpose of each function, and gave a description for each variable. Eventually this capability will be used like other code analysis tools currently used by development teams as part of the CI/CD pipeline. Run one set of tools to look for code defects. Run another to extract and curate metadata. Someone will still have to review the results, but this gets us a long way there. ... Large language models can be applied in analytics a couple different ways. The first is to generate the answer solely from the LLM. Start by ingesting your corporate information into the LLM as context. Then, ask it a question directly and it will generate an answer. Hopefully the correct answer. But would you trust the answer? Associative memories are not the most reliable for database-style lookups. Imagine ingesting all of the company’s transactions then asking for the total net revenue for a particular customer. Why would you do that? Just use a database. 
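The "just use a database" point in miniature, using an in-memory SQLite table with made-up transactions: the exact aggregation an associative memory is unreliable for is a one-line query for a database.

```python
import sqlite3

# Exact aggregation is what databases are for; an LLM asked the same
# question over ingested transactions may confabulate a plausible total.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("acme", 1200.0), ("acme", -200.0), ("globex", 500.0)],
)
(net_revenue,) = conn.execute(
    "SELECT SUM(amount) FROM transactions WHERE customer = ?", ("acme",)
).fetchone()
```

The practical split the passage implies: use the LLM where approximate, semantic answers are the goal (describing code, curating metadata), and keep lookups and totals in systems built to be exact.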

Daily Tech Digest - March 24, 2025


Quote for the day:

"To be an enduring, great company, you have to build a mechanism for preventing or solving problems that will long outlast any one individual leader" -- Howard Schultz



Identity Authentication: How Blockchain Puts Users In Control

One key benefit of blockchain is that it's decentralized. Instead of a single database that records user information -- one ripe for data breaches -- blockchain uses something called decentralized identifiers (DIDs). DIDs are cryptographic key pairs that allow users to have more control over their online identities. They are becoming more popular, with Forbes claiming they're the future of online identity. To explain what DIDs are, let's start by explaining what they are not. Today, most people interact online via a centralized identifier, such as an email address, username or password. This allows the database to store your digital information on that platform. But single databases are more vulnerable to data breaches and users have no control over their data. When we use centralized platforms, we really hand over all our trust to whatever platform we use. DIDs provide a new way to access information while allowing users to maintain ownership. ... That said, identity authentication and blockchain technology don't have to be complex topics. They can be easy to use but require intuitive platforms and simple user experiences. The EU's digital policies offer a strong foundation for integrating blockchain. If blockchain becomes part of the initial rulemaking, it could fuel more widespread adoption. There's a long way to go before people feel confident understanding concepts like DIDs. 
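Conceptually, a DID is a stable identifier derived from a key pair the user controls, rather than a record a platform holds. Python's standard library has no public-key cryptography, so the sketch below stands in random bytes for the key purely to show the identifier's shape; real methods such as did:key use actual verification keys and multibase encoding:

```python
import hashlib
import secrets

def make_did(method="example"):
    """Derive a DID-shaped identifier from a user-held secret.

    The 'public part' here is a stand-in (a hash of random bytes); in a
    real DID method it would be a verification key, and the holder would
    prove control by signing challenges with the private half instead of
    handing a password to a central database.
    """
    private_key = secrets.token_bytes(32)  # never leaves the user's wallet
    public_part = hashlib.sha256(private_key).hexdigest()[:32]
    return f"did:{method}:{public_part}", private_key

did, _key = make_did()
```

The point of the shape is that no registry assigned the identifier: anyone holding the private half can prove it is theirs, which is what removes the single breach-prone database from the picture.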


Cloud providers aren’t delivering on security promises

With 44% of businesses already spending between £101,000 and £250,000 on cloud migrations in the past 12 months, there is a clear need for organizations to ensure they are working with trusted partners who can meet this security need. Otherwise, companies will run the risk of having to spend more to not only move to new suppliers but also respond to the cost of a data breach. The cost and resources needed for organizations to boost their own security skills and technology are often too prohibitive. ... However, despite the clear advantages to security and job stability, only 22% of CISOs use a channel partner in their cloud migration process. This leaves many exposed to unnecessary risk from attacks or job loss. “It is clear that many organizations are struggling when it comes to securing cloud environments. A combination of underdelivering cloud providers and a lack of in-house skills is resulting in a dangerous situation which can leave valuable company data exposed to risk. Simply adding more technology will not solve this problem,” said Clare Loveridge, VP and GM EMEA at Arctic Wolf. “Securing the cloud is a shared responsibility between the cloud provider and the organization. While cloud providers offer good security tools, it is important that you have a team of security experts to help you run the operation.”


CISOs are taking on ever more responsibilities and functional roles – has it gone too far?

“The CISO role has expanded significantly over the years as companies realize that information security has a unique picture of what is going on across the organization,” says Doug Kersten, CISO of software company Appfire. “Traditionally, CISOs have focused on fundamental security controls and threat mitigation,” he adds. “However, today they are increasingly expected to play a central role in maintaining business resilience and compliance. Many CISOs are now responsible for risk management, business continuity, and disaster recovery as well as overseeing regulatory compliance across various jurisdictions.” ... “We’re seeing a convergence of roles under head of security because of the background and problem-solving skills of these people. They have become problem-solver in chief,” says Steve Martano, IANS Research faculty and executive cyber recruiter at Artico Search. That, though, comes with challenges. “CISOs are already experiencing high levels of stress, with recent data highlighting that nearly one in four CISOs are considering leaving the profession due to stress,” Kersten says. “Many CISOs only stay in the role for two to three years. With this, the expectations placed on CISOs are undeniably growing, and organizations risk overburdening them without sufficient resources and support. ..."


Fixing the Fixing Process: Why Automation is Key to Cybersecurity Resilience

Cybersecurity environments have seen nonstop evolution, driven by increasingly sophisticated attack techniques, the expansion of complex cloud-native architecture, and the rise of AI-powered threats that outpace traditional defense strategies. At the same time, development timelines have accelerated, pushing security teams to keep pace without becoming a bottleneck. ... It’s a daunting and intimidating task that requires sufficient time and attention. Moreover, adopting automation means ensuring that security and development teams trust the outputs. Many organizations struggle with this transition because automation tools, if not properly configured, can generate inaccuracies or miss critical context. Security teams fear losing control over decision-making, while developers worry about receiving even more noise if automation isn’t fine-tuned. ... Attackers are already leveraging AI to exploit vulnerabilities rapidly, while security teams often rely on static and manual processes that have no chance of keeping up. AI-enabled EAPs help teams proactively identify and mitigate vulnerabilities before adversaries can exploit them. By automating exposure assessments, organizations can shrink the reconnaissance window available to attackers, limiting their ability to target common vulnerabilities and exposures (CVEs), security misconfigurations, software flaws, and other weaknesses. 


Can we make AI less power-hungry? These researchers are working on it.

Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. “That was really core to why Nvidia was born. We paired CPUs with accelerators to drive the efficiency onward,” said Dion Harris, head of Data Center Product Marketing at Nvidia. In the 2010–2020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady. ... The increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. “One way to go about it is reducing the amount of computation,” said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative. One of the first things researchers tried was a technique called pruning, which aimed to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) “optimal brain damage.” You take a trained model and remove some of its parameters, usually targeting those with values at or near zero, which contribute little to the overall performance. 
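The pruning idea described above can be sketched in a few lines. This is an illustration of simple magnitude pruning (zeroing out the smallest-magnitude weights), not LeCun's original second-order "optimal brain damage" criterion; the weight matrix below is made up for the demo.

```python
# Illustrative magnitude pruning: zero out the smallest-magnitude
# parameters of a trained weight matrix. Note this is the simple
# magnitude-based variant, not the second-order criterion from the
# original "Optimal Brain Damage" paper.

def prune_by_magnitude(weights, fraction):
    """Return a copy of `weights` with the smallest `fraction`
    of entries (by absolute value) set to zero. Ties at the
    threshold are all pruned, so slightly more than `fraction`
    of entries may be zeroed."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * fraction)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

weights = [[0.01, -0.80, 0.05],
           [0.90, -0.02, 0.40]]
pruned = prune_by_magnitude(weights, 0.5)
zeros = sum(w == 0.0 for row in pruned for w in row)
print(pruned)
print(zeros)
```

After pruning, the zeroed entries can be stored sparsely or skipped at inference time, which is where the memory and compute savings come from.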


Five Years of Cloud Innovation: 2020 to 2025

The FinOps organization and the implementation of FinOps standards across cloud providers has been the most impactful development over the last five years, states Allen Brokken, head of customer engineering at Google, in an online interview. This has fundamentally transformed how organizations understand the business value of their cloud deployments, he states. "Standardization has enabled better comparisons between cloud providers and created a common language for technical teams, business unit owners, and CFOs to discuss cloud operations." ... The public cloud has democratized access to technology and increased accessibility for organizations across industries that have faced intense volatility and change in the past five years, Adams observes via email. "This innovation has facilitated a new level of co-innovation and enabled new business models that allow companies to realize future opportunities with ease." Public cloud platforms offer adopters immense benefits, Adams says. "With the public cloud, businesses can scale IT infrastructure on-demand without significant upfront investment." This flexibility comes with a reduced total cost of ownership, since public cloud solutions often lead to lower costs for hardware, software and maintenance. 


Cloud, colocation or on-premise? Consider all options

Following the rush to the cloud, the cost implications should have prompted some companies to move back to on-premise, but it hasn’t, according to Lamb. “I thought it might happen with AI, because potentially the core per hour rate for AI is going to be far higher, but it hasn’t.” Lamb’s advice for CIOs is to be wary of being tied into particular providers or AI models, noting that Microsoft is creating models and not charging for them, knowing that companies will still be paying for the compute to use them. Lamb also says that, whether we’re talking on-premise, colocation or cloud, the potential for retrofitting existing capacity is limited, at least when it comes to capacity aimed at AI. After all, those GPUs often require liquid cooling to the chip. This changes the infrastructure equation, says Lamb, increasing the footprint for cooling infrastructure in comparison to compute. Quite apart from the real estate impact, this isn’t something most enterprises will want to tackle. Also, cooling and power will only become more complicated. Andrew Bradner, Schneider Electric’s general manager for cooling, is confident that many sectors will continue to operate on-premise datacentre capacity – life sciences, fintech and financial, for example. 


How GenAI is Changing Work by Supporting, Not Replacing People

A common misconception is that AI adoption leads to workforce reduction. While automation has historically replaced repetitive, manual labor, the rise of GenAI is fundamentally different. Unlike traditional automation, which replaces human effort, GenAI amplifies human potential by reducing workload friction. The same study reinforces this point: AI doesn’t just increase speed; it also improves work quality. Employees using AI-powered tools experienced a 40% reduction in task completion time and an 18% improvement in output quality, demonstrating that AI is an efficiency enabler rather than a job replacer. Consider the historical trend: The Industrial Revolution automated factory work but also created entirely new job categories and industries. Similarly, the digital revolution reduced the need for clerical roles yet generated millions of jobs in software development, cybersecurity, and IT infrastructure. ... Bias in machine learning models remains an issue: AI trained on historical data will perpetuate prevailing biases, so human monitoring is critical. GenAI can also generate misleading or inaccurate results, further highlighting the need for oversight. AI can generate reports, but it cannot negotiate deals, understand organizational culture, or make leadership decisions. 


Frankenstein Fraud: How to Protect Yourself Against Synthetic Identity Fraud

Synthetic identity fraud is an exercise in patience, at least on the criminal's part, especially if they're using the Social Security number of a child. The identity is constructed by using a real Social Security number in combination with an unassociated name, address, date of birth, phone number or other piece of identifying information to create a new "whole" identity. Criminals can purchase SSNs on the dark web, steal them from data breaches or con them from people through things like phishing attacks and other scams. Synthetic identity theft flourishes because of a simple flaw in the US financial and credit system. When the criminal uses the synthetic identity to apply to borrow from a lender, it's typically denied credit because there's no record of that identity in their system. The thieves are expecting this since children and teens may have no credit or a thin history, and elderly individuals may have poor credit scores. Once an identity applies for an account and is presented to a credit bureau, it's shared with other credit bureaus. That act is enough to allow credit bureaus to recognize the synthetic identity as a real person, even if there's little activity or evidence to support that it's a real person. Once the identity is established, the fraudsters can start borrowing credit from lenders.


Will AI erode IT talent pipelines?

“The pervasive belief that gen AI is an automation technology, that gen AI increases productivity by automation, is a huge fallacy,” says Suda, though he admits it will eliminate the need for certain skills — including IT skills. “Losing skills is fine,” he says, adding that machines have been eliminating the need for certain skills for centuries. “What gen AI is helping us do is learn new skills and learn new things, and that does create an impact on the workforce. “What it is eroding is the opportunity for junior IT staff to have the same experiences that junior staff have today or yesterday,” he says. “Therefore, there’s an erosion of yesterday’s talent pipeline. Yesterday’s talent pipeline is changing, and the steps to get through it are changing from what we have today to what we need [in the future].” Steven Kirz, senior partner for operations excellence at consulting firm West Monroe, shares similar insights. Like Suda, Kirz says AI doesn’t “universally make everybody more productive. It’s unequal across roles and activities.” Kirz also says both research and anecdotal evidence show that AI is replacing lower-level, mundane, and repetitive tasks. In IT, that tends to be reporting, clerical, data entry, and administrative activities. “And routine roles being replaced [by technology] doesn’t feel new to me,” he adds.


Daily Tech Digest - January 30, 2025


Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley


Doing authentication right

Like encryption, authentication is one of those things that you are tempted to “roll your own” but absolutely should not. The industry has progressed enough that you should definitely “buy and not build” your authentication solution. Plenty of vendors offer easy-to-implement solutions and stay diligently on top of the latest security issues. Authentication also becomes a tradeoff between security and a good user experience. ... Passkeys are a relatively new technology and there is a lot of FUD floating around out there about them. The bottom line is that they are safe, secure, and easy for your users. They should be your primary way of authenticating. Several vendors make implementing passkeys not much harder than inserting a web component in your application. ... Forcing users to use hard-to-remember passwords means they will be more likely to write them down or use a simple password that meets the requirements. Again, it may seem counterintuitive, but XKCD has it right. In addition, the longer the password, the harder it is to crack. Let your users create long, easy-to-remember passwords rather than force them to use shorter, difficult-to-remember passwords. ... Six digits is the outer limit for OTP codes, and you should consider shorter ones. Under no circumstances should you require OTPs longer than six digits because they are vastly harder for users to keep in short-term memory.
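The XKCD point about length beating forced complexity can be made concrete with a back-of-the-envelope entropy calculation. This sketch assumes a 95-character printable-ASCII pool for the "complex" password and a 7,776-word Diceware-style list for the passphrase; both pool sizes are illustrative assumptions, not figures from the article.

```python
# Rough entropy comparison (in bits) between a short "complex"
# password and a long, easy-to-remember passphrase. Assumes the
# password/passphrase is chosen uniformly at random from its pool.
import math

def charset_entropy_bits(length, pool_size):
    """Entropy of a random string: length * log2(pool_size)."""
    return length * math.log2(pool_size)

# 8 characters drawn from ~95 printable ASCII characters
complex_pw = charset_entropy_bits(8, 95)
# 5 words drawn at random from a 7,776-word (Diceware-style) list
passphrase = 5 * math.log2(7776)

print(f"8-char complex password: {complex_pw:.1f} bits")
print(f"5-word passphrase:       {passphrase:.1f} bits")
```

The five-word passphrase comes out ahead (roughly 65 bits vs. roughly 53), and each additional word adds another ~12.9 bits, whereas each additional random character adds only ~6.6.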


Augmenting Software Architects with Artificial Intelligence

Technical debt is mistakenly thought of as just a source code problem, but the concept is also applicable to source data (this is referred to as data debt) as well as your validation assets. AI has been used for years to analyze existing systems to identify potential opportunities to improve the quality (to pay down technical debt). SonarQube, CAST SQG and BlackDuck’s Coverity Static Analysis statically analyze existing code. Applitools Visual AI dynamically finds user interface (UI) bugs, and Veracode’s DAST finds runtime vulnerabilities in web apps. The advantages of this use case are that it pinpoints aspects of your implementation that potentially should be improved. As described earlier, AI tooling offers the potential for greater range, thoroughness, and trustworthiness of the work products as compared with that of people. Drawbacks to using AI tooling to identify technical debt include the accuracy, IP, and privacy risks described above. ... As software architects we regularly work with legacy implementations that we need to leverage and often evolve. This software is often complex, using a myriad of technologies for reasons that have been forgotten over time. Tools such as CAST Imaging visualize existing code and ChartDB visualizes legacy data schemas to provide a “birds-eye view” of the actual situation that you face.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

Your first step should be to evaluate the state of your company’s cyber defenses, including communications and IT infrastructure, and the cybersecurity measures you already have in place—identifying any vulnerabilities and gaps. One vulnerability to watch for is a dependence on multiple security platforms, patches, policies, hardware, and software, where a lack of tight integration can create gaps that hackers can readily exploit. Consider using operational resilience assessment software as part of the exercise, and if you lack the internal know-how or resources to manage the assessment, consider enlisting a third-party operational resilience risk consultant. ... Aging network communications hardware and software, including on-premises systems and equipment, are top targets for hackers during a disaster because they often include a single point of failure that’s readily exploitable. The best counter in many cases is to move the network and other key communications infrastructure (a contact center, for example) to the cloud. Not only do cloud-based networks such as SD-WAN (software-defined wide area network) have the resilience and flexibility to preserve connectivity during a disaster, they also tend to come with built-in cybersecurity measures.


California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal

“The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before dovetailing into the many ways in which AI companies could, potentially, be breaking the law. ... There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble.


Knowledge graphs: the missing link in enterprise AI

Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. So in theory, they’d be a great way to help LLMs understand the meaning of corporate data sets, making it easier and more efficient for companies to find relevant data to embed into queries, and making the LLMs themselves faster and more accurate. ... Knowledge graphs reduce hallucinations, he says, but they also help solve the explainability challenge. Knowledge graphs sit on top of traditional databases, providing a layer of connection and deeper understanding, says Anant Adya, EVP at Infosys. “You can do better contextual search,” he says. “And it helps you drive better insights.” Infosys is now running proof of concepts to use knowledge graphs to combine the knowledge the company has gathered over many years with gen AI tools. ... When a knowledge graph is used as part of the RAG infrastructure, explicit connections can be used to quickly zero in on the most relevant information. “It becomes very efficient,” said Duvvuri. And companies are taking advantage of this, he says. “The hard question is how many of those solutions are seen in production, which is quite rare. But that’s true of a lot of gen AI applications.”
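As a rough sketch of the idea, a knowledge graph used in a RAG pipeline can be reduced to (subject, relation, object) triples plus a neighborhood lookup that zeroes in on the facts connected to the entities in a query, which then become grounded context for the LLM prompt. The entities and relations below are hypothetical, not drawn from any product mentioned above.

```python
# Minimal sketch of knowledge-graph-backed retrieval for RAG:
# facts are stored as (subject, relation, object) triples, and a
# query pulls only the triples connected to its entities, giving
# the LLM explicit, contextually relevant facts to ground on.

TRIPLES = [
    ("AcmeDB", "depends_on", "AuthService"),
    ("AuthService", "owned_by", "Platform Team"),
    ("AcmeDB", "stores", "customer records"),
    ("Checkout", "reads_from", "AcmeDB"),
]

def neighbors(entity, triples=TRIPLES):
    """Return every triple in which `entity` appears
    as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

def build_context(query_entities):
    """Assemble a grounded-facts preamble for an LLM prompt."""
    facts = []
    for e in query_entities:
        for s, r, o in neighbors(e):
            facts.append(f"{s} {r.replace('_', ' ')} {o}")
    return "Known facts:\n" + "\n".join(sorted(set(facts)))

print(build_context(["AcmeDB"]))
```

The explicit edges are what make this efficient: instead of similarity-searching an entire document store, retrieval follows known connections outward from the entities the query actually mentions.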


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet, a human editing together multiple AI generated video clips into a new whole would seem to qualify. The report also clarifies that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media. ... While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues.


From connectivity to capability: The next phase of private 5G evolution

Faster connectivity is just one positive aspect of private 5G networks; they are the basis of the current digital era. These networks outperform conventional public 5G capabilities, giving businesses incomparable control, security, and flexibility. For instance, private 5G is essential to the seamless connection of billions of devices, ensuring ultra-low latency and excellent reliability in the worldwide IoT industry, which has the potential to reach $650.5 billion by 2026, as per Markets and Markets. Take digital twins, for example—virtual replicas of physical environments such as factories or entire cities. These replicas require real-time data streaming and ultra-reliable bandwidth to function effectively. Private 5G enables this by delivering consistent performance, turning theoretical models into practical tools that improve operational efficiency and decision-making. ... Private 5G is also making big improvements for sectors that rely on efficiency and precision. For instance, in the logistics sector, it connects fleets, warehouses, and ports with fast, low-latency networks, streamlining operations throughout the supply chain. In fleet management, private 5G allows real-time tracking of vehicles, improving route planning and fuel use. 


American CISOs should prepare now for the coming connected-vehicle tech bans

The rule BIS released is complex and intricate and relies on many pre-existing definitions and policies used by the Commerce Department for different commercial and industrial matters. However, in general, the restrictions and compliance obligations under the rule affect the entire US automotive industry, including all-new, on-road vehicles sold in the United States (except commercial vehicles such as heavy trucks, for which rules will be determined later.) All companies in the automotive industry, including importers and manufacturers of CVs, equipment manufacturers, and component suppliers, will be affected. BIS said it may grant limited specific authorizations to allow mid-generation CV manufacturers to participate in the rule’s implementation period, provided that the manufacturers can demonstrate they are moving into compliance with the next generation. ... Connected vehicles and related component suppliers are required to scrutinize the origins of vehicle connectivity systems (VCS) hardware and automated driving systems (ADS) software to ensure compliance. Suppliers must exclude components with links to the PRC or Russia, which has significant implications for sourcing practices and operational processes.


What to know about DeepSeek AI, from cost claims to data privacy

"Users need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email. According to some observers, the fact that R1 is open-source means increased transparency, giving users the opportunity to inspect the model's source code for signs of privacy-related activity. Regardless, DeepSeek also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online). ... "DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," confirms Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."


The AI Imperative: How CIOs Can Lead the Charge

For CIOs, AGI will take this to the next level. Imagine systems that don't just fix themselves but also strategize, optimize and innovate. AGI could automate 90% of IT operations, freeing up teams to focus on strategic initiatives. It could revolutionize cybersecurity by anticipating and neutralizing threats before they strike. It could transform data into actionable insights, driving smarter decisions across the organization. The key is to begin incrementally, prove the value and scale strategically. AGI isn't just a tool; it's a game-changer. ... Cybersecurity risks are real and imminent. Picture this: you're using an open-source AI model and suddenly, your system gets hacked. Turns out, a malicious contributor slipped in some rogue code. Sounds like a nightmare, right? Open-source AI is powerful, but has its fair share of risks. Vulnerabilities in the code, supply chain attacks and lack of appropriate vendor support are absolutely real concerns. But this is true for any new technology. With the right safeguards, we can minimize and mitigate these risks. Here's what I recommend: Regularly review and update open-source libraries. CIOs should encourage their teams to use tools like software composition analysis to detect suspicious changes. Train your team to manage and secure open-source AI deployments. 
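One of the safeguards recommended above, reviewing open-source components for suspicious changes, can be approximated by pinning and verifying artifact checksums before use, so a tampered release is caught at install time. A minimal sketch, assuming the pinned digest would normally come from a lockfile rather than being computed inline as it is here for the demo:

```python
# Sketch of a supply-chain safeguard: verify that a downloaded
# open-source artifact matches a pinned SHA-256 digest before it
# is used, rejecting anything that has been altered.
import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Accept the artifact only if its hash matches the pinned value."""
    return sha256_digest(data) == pinned_digest

artifact = b"model-weights-v1"
# Stand-in for a lockfile entry; in practice this is recorded when
# the dependency is first reviewed, not computed at install time.
pinned = sha256_digest(artifact)

print(verify_artifact(artifact, pinned))
print(verify_artifact(b"tampered-weights", pinned))
```

Software composition analysis tools automate this kind of check (plus vulnerability and license scanning) across an entire dependency tree; the hash comparison above is just the core idea.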

Daily Tech Digest - January 22, 2025

How Operating Models Need to Evolve in 2025

“In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as ‘AIOps.’ However, the use of AI will become so common in IT operations that we won’t need to call it [that] explicitly,” says Ruh in an email interview. “Instead, the term, ‘AIOps’ will become obsolete over the next two years as enterprises move towards the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy.” ... “The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation,” says Langley in an email interview. “These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals.” ... “IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment,” says Langley.


Cybersecurity is tough: 4 steps leaders can take now to reduce team burnout

Whether it’s about solidifying partnerships with business managers, changing corporate culture, or correcting errant employees, peer input is golden. No matter the scenario, it’s likely that other security leaders have dealt with the same or similar situations, so their input, empathy, and advice are invaluable. ... Well-informed leaders are more likely to champion and include security in new initiatives, an important shift in culture from seeing security as a pain to embracing security as an important business tool. Such a shift greatly reduces another top stressor among CISO’s — lack of management support. In a security-centric organization, team members in all roles experience less pressure to perform miracles with no resources. And, instead of fighting with leaders for resources, the CISO has more time to focus on getting to know and better manage staff. ... Recognition, she says, boosts individual and team morale and motivation. “I am grateful for and do not take for granted having excellent leadership above me that supports me and my team. I try to make it easy for them.” And, since personal stressors also impact burnout, she encourages team members to share their personal stressors at her one-on-ones or in the group meeting where they can be supported.  


Mandatory MFA, Biometrics Make Headway in Middle East, Africa

Digital identity platforms, such as UAE Pass in the United Arab Emirates and Nafath in Saudi Arabia, integrate with existing fingerprint and facial-recognition systems and can reduce the reliance on passwords, says Chris Murphy, a managing director with the cybersecurity practice at FTI Consulting in Dubai. "With mobile devices serving as the primary gateway to digital services, smartphone-based biometric authentication is the most widely used method in public and private sectors," he says. "Some countries, such as the UAE and Saudi Arabia, are early adopters of passwordless authentication, leveraging AI-based facial recognition and behavioral analytics for seamless and secure identity verification." African nations have also rolled out national identity cards based on biometrics. In South Africa, for example, customers can walk into a bank and open an account by using their fingerprint and linking it to the national ID database, which acts as the root of trust, says BIO-Key's Sullivan. "After they verify that that person is who they say they are with the Home Affairs Ministry, they can store that fingerprint [in the system]," he says. "From then on, anytime they want to authenticate that user, they just touch a finger. They've just now started rolling out the ability to do that without even presenting your card for subsequent business."


Acronis CISO on why backup strategies fail and how to make them resilient

Start by conducting a thorough business impact analysis. Figure out which processes, applications, and data sets are mission-critical, and decide how much downtime or data loss is acceptable. The more vital the data or application, the tighter (and more expensive) your RTO and RPO targets will be. Having a strong data and systems classification system will make this process significantly easier. There’s always a trade-off: the more stringent your RTO and RPO, the higher the cost and complexity of maintaining the necessary backup infrastructure. That’s why prioritisation is key. For example, a real-time e-commerce database might need near-zero downtime, while archived records can tolerate days of recovery time. Once you establish your priorities, you can use technologies like incremental backups, continuous data protection, and cross-site replication to meet tighter RTO and RPO without overwhelming your network or your budget. ... Start by reviewing any regulatory or compliance rules you must follow; these often dictate which data must be kept and for how long. Keep in mind that some information may not be kept longer than absolutely needed – personally identifiable information comes to mind. Next, look at the operational value of your data. 
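The prioritisation described above can be sketched as a simple mapping from RTO/RPO targets to backup tiers. The thresholds, tier names, and strategies below are assumptions for illustration, not standard or vendor values.

```python
# Illustrative backup-tier selection from recovery targets:
# tighter RTO (max tolerable downtime) and RPO (max tolerable
# data loss) map to more expensive protection strategies.

def backup_tier(rto_hours, rpo_hours):
    """Map recovery targets to a (tier, strategy) pair."""
    if rto_hours <= 1 and rpo_hours <= 0.25:
        return ("Tier 1", "continuous data protection + cross-site replication")
    if rto_hours <= 24 and rpo_hours <= 4:
        return ("Tier 2", "hourly incremental backups")
    return ("Tier 3", "daily backups to archive storage")

# The two examples from the text: a real-time e-commerce database
# versus archived records that tolerate days of recovery time.
print(backup_tier(0.5, 0.1))
print(backup_tier(72, 24))
```

Classifying every system through a table like this makes the cost trade-off explicit: only the workloads that genuinely need Tier 1 pay for it.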


The bitter lesson for generative AI adoption

The rapid pace of innovation and the proliferation of new models have raised concerns about technology lock-in. Lock-in occurs when businesses become overly reliant on a specific model with bespoke scaffolding that limits their ability to adapt to innovations. Upon its release, GPT-4 was the same cost as GPT-3 despite being a superior model with much higher performance. Since the GPT-4 release in March 2023, OpenAI prices have fallen a further sixfold for input data and fourfold for output data with GPT-4o, released May 13, 2024. Of course, an analysis of this sort assumes that generation is sold at cost or a fixed profit, which is probably not true, and significant capital injections and negative margins for capturing market share have likely subsidized some of this. However, we doubt these levers explain all the improvement gains and price reductions. Even Gemini 1.5 Flash, released May 24, 2024, offers performance near GPT-4, costing about 85 times less for input data and 57 times less for output data than the original GPT-4. Although eliminating technology lock-in may not be possible, businesses can reduce their grip on technology adoption by using commercial models in the short run.
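One way to blunt the lock-in the passage warns about is to route all completions through a thin internal adapter, so that a provider swap is a configuration change rather than a rewrite, and to keep pricing in data rather than code so cost comparisons stay trivial as prices fall. A sketch under those assumptions; the provider names and per-token prices are hypothetical.

```python
# Thin provider-abstraction sketch: the application codes against
# CompletionClient, never against a specific vendor SDK, and
# pricing lives in a table so it can be updated without code changes.

PRICING_PER_1M_TOKENS = {          # (input $, output $), hypothetical
    "provider_a": (30.00, 60.00),
    "provider_b": (0.35, 1.05),
}

def estimate_cost(provider, input_tokens, output_tokens):
    """Dollar cost of one call at the table's per-million-token rates."""
    inp, out = PRICING_PER_1M_TOKENS[provider]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

class CompletionClient:
    """Adapter boundary: `transport` wraps whichever vendor SDK is
    currently in use, so swapping providers touches one place."""
    def __init__(self, provider, transport):
        self.provider, self.transport = provider, transport
    def complete(self, prompt):
        return self.transport(self.provider, prompt)

# A stub transport stands in for a real vendor call here.
client = CompletionClient("provider_b", lambda p, q: f"[{p}] echo: {q}")
print(client.complete("hello"))
print(estimate_cost("provider_a", 1000, 500))
print(estimate_cost("provider_b", 1000, 500))
```

With the scaffolding confined to the adapter, re-running the cost table against a newly released model is enough to decide whether a swap is worth it.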


Staying Ahead: Key Cloud-Native Security Practices

NHIs represent machine identities used in cybersecurity. They are created by combining a “Secret” (an encrypted password, token, or key) and the permissions allocated to that Secret by a receiving server. In an increasingly digital landscape, the role of these machine identities and their secrets cannot be overstated. This makes the management of NHIs a top priority for organizations, particularly those in industries like financial services, healthcare, and travel. ... As technology has advanced, so too has the need for more thorough and advanced cybersecurity practices. One rapidly evolving area is the management of Non-Human Identities (NHIs), which undeniably interweaves secret data. Understanding and efficiently managing NHIs and their secrets are not just choices but an imperative for organizations operating in the digital space and leaning towards cloud-native applications. NHIs have been sharing their secrets with us for some time, communicating an urgent requirement for attention, understanding and improved security practices. They give us hints about potential security weaknesses through unique identifiers that are not unlike a travel passport. By monitoring, managing, and securely storing these identifiers and the permissions granted to them, we can bridge the troublesome chasm between the security and R&D teams, making for better-protected organizations.
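The "Secret plus permissions" pairing described above can be sketched as a small identity record with an access check, the kind of structure an NHI inventory would track per machine identity. The field names and permission strings are illustrative assumptions, not any product's schema.

```python
# Sketch of a Non-Human Identity record: a machine identity pairs
# a Secret (stored here only as a hash) with the permissions a
# receiving server has granted to that Secret.
from dataclasses import dataclass, field

@dataclass
class NonHumanIdentity:
    name: str
    secret_hash: str                      # never store the raw secret
    permissions: set = field(default_factory=set)

    def allowed(self, action: str) -> bool:
        """Access check performed at the receiving server."""
        return action in self.permissions

ci_runner = NonHumanIdentity(
    name="ci-deploy-bot",
    secret_hash="sha256:redacted-for-demo",
    permissions={"read:artifacts", "write:staging"},
)

print(ci_runner.allowed("write:staging"))
print(ci_runner.allowed("write:production"))
```

An inventory of records like this is what makes the monitoring described above possible: each identity's permissions can be audited against what it actually needs, and over-privileged Secrets stand out.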


3 promises every CIO should keep in 2025

To minimize disappointment, technologists need to set the expectations of business leaders. At the same time, they need to evangelize on the value of new technology. “The CIO has to be an evangelist, educator, and realist all at the same time,” says Fernandes. “IT leaders should be under-hypers rather than over-hypers, and promote technology only in the context of business cases.” ... According to Leon Roberge, CIO for Toshiba America Business Solutions and Toshiba Global Commerce Solutions, technology leaders should become more visible to the business and lead by example to their teams. “I started attending the business meetings of all the other C-level executives on a monthly basis to make sure I’m getting the voice of the business,” he says. “Where are we heading? How are we making money? How can I help business leaders overcome their challenges and meet their objectives?” ... CIOs should also build platforms for custom tools that meet the specific needs not only of their industry and geography, but of their company — and even for specific divisions. AI models will be developed differently for different industries, and different data will be used to train for the healthcare industry than for logistics, for example. Each company has its own way of doing business and its own data sets. 


5G in Business: Roadblocks, Catalysts in Adoption - Part 1

Enterprises considering 5G adoption are confronted with several challenges, key among them high capex, security, interoperability and integration with existing infrastructure, and skills development within their workforce. Inconsistent coverage and a complex regulatory landscape also inhibit adoption. Jenn Mullen, emerging technology solutions lead at Keysight Technologies, told ISMG that business leaders must address potential security concerns, ensure seamless integration with existing IT infrastructure and demonstrate a strong return on investment. ... Early enterprise 5G projects were unsuccessful because the applications and devices weren't 5G compatible. For instance, in 2021, ArcelorMittal France conceived 5G Steel, a private cellular network serving its steelworks in Dunkirk, Mardyck and Florange (France), to support its digitalization plans with high-speed, site-wide 5G connectivity. The private network, which covers a 10 square kilometer area, was built by French public network operator Orange. When it turned the network on in October 2022, the connecting devices were only 4G, leading to underutilization. "The availability of 5G-compatible terminals suitable for use in an industrial environment is too limited," said David Glijer, the company's director of digital transformation at the time.


Rethinking Business Models With AI

We arrive in a new era of transforming business models and organizations by leveraging the power of Gen AI. An AI-powered business model is an organizational framework that fundamentally integrates AI into one or more core aspects of how a company creates, delivers and captures value. Unlike traditional business models that merely use AI as a tool for optimization, a truly AI-powered business model exhibits distinctive characteristics, such as self-reinforcing intelligence, scalable personalization and ecosystem integration. ... As an organization moves through its AI-powered business model innovation journey, it must systematically consider the eight essentials of AI-driven business models (Figure 3) and include a holistic assessment of current state capabilities, identification of AI innovation opportunities and development of a well-defined map of the transformation journey. Following this, rapid innovation sprints should be conducted to translate strategic visions into tangible results that validate the identified AI opportunities and de-risk at-scale deployments. ... While the potential rewards are compelling — from operational efficiencies to entirely new value propositions — the journey is complex and fraught with pitfalls, not least from existing barriers. 


Increase in cyberattacks setting the stage for identity security’s rapid growth

Digital identity security is rapidly growing in importance as identity infrastructure becomes a target for cyber attackers. Misconfigurations of identity systems have become a significant concern, yet many companies still seem unaware of the issue. Security expert Hed Kovetz says that "identity is always the go-to of every attacker." As CEO and co-founder of digital identity protection firm Silverfort, he believes that protecting identity is one of security's most complicated tasks. "If you ask any security team, I think identity is probably the one that is the most complex," says Kovetz. "It's painful: There are so many tools, so many legacy technologies and legacy infrastructure still in place." ... To secure identity infrastructures, security specialists need to deal consistently with both very old and very new technologies. Kovetz says he first began dealing with legacy systems that could not be properly secured and could be used by attackers to spread inside the network. He later extended that protection to modern technologies as well. "I think that protecting these things end to end is the key," says Kovetz. "Otherwise, attackers will always go to the weaker part." ... Although the increase in cyberattacks is setting the stage for identity security's rapid growth in importance, some organizations are still struggling to acknowledge weaknesses in their identity infrastructure.
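To make the misconfiguration concern above concrete, a minimal audit over identity records might look like the following. The account fields and risk rules are hypothetical, chosen for illustration rather than drawn from any vendor's schema.

```python
# Illustrative checks for common identity misconfigurations.
# Field names and rules below are assumptions for this sketch.
RISK_RULES = [
    ("legacy_auth_enabled", lambda acct: acct.get("legacy_auth_enabled", False)),
    ("password_never_expires", lambda acct: acct.get("password_never_expires", False)),
    ("admin_without_mfa", lambda acct: acct.get("is_admin") and not acct.get("mfa_enrolled")),
]

def audit(accounts):
    """Return {account name: [names of triggered risk rules]} for risky accounts."""
    findings = {}
    for acct in accounts:
        hits = [name for name, rule in RISK_RULES if rule(acct)]
        if hits:
            findings[acct["name"]] = hits
    return findings
```

Even a simple rule table like this surfaces the "weaker part" an attacker would head for first; real deployments would pull the records from directory services and apply far richer rules.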



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley