
Daily Tech Digest - July 02, 2025


Quote for the day:

"Success is not the absence of failure; it's the persistence through failure." -- Aisha Tyler


How cybersecurity leaders can defend against the spur of AI-driven NHI

Many companies don’t have lifecycle management for all their machine identities, and security teams may be reluctant to shut down old accounts because doing so might break critical business processes. ... Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed. ... AI agents are the next step in the evolution of generative AI. Unlike chatbots, which only work with company data when it is provided by a user or an augmented prompt, agents are typically more autonomous and can go out and find needed information on their own. This means they need access to enterprise systems at a level that allows them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly, “it opens up the door to a lot of bad things to happen.” Because of their ability to plan, reason, act, and learn, AI agents can exhibit unpredictable and emergent behaviors. An AI agent that’s been instructed to accomplish a particular goal might find a way to do it in an unanticipated way, and with unanticipated consequences. This risk is magnified even further with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes.


The silent backbone of 5G & beyond: How network APIs are powering the future of connectivity

Network APIs are fueling a transformation by making telecom networks programmable and monetisable platforms that accelerate innovation, improve customer experiences, and open new revenue streams. ... Contextual intelligence is what makes these new-generation APIs so attractive. Your needs change significantly depending on whether you’re playing a cloud game, streaming a match, or participating in a remote meeting. Programmable networks can now detect these needs and adjust dynamically. Take the example of a user streaming a football match. With network APIs, a telecom operator can offer temporary bandwidth boosts just for the game’s duration. Once it ends, the network automatically reverts to the user’s standard plan—no friction, no intervention. ... Programmable networks are expected to have the greatest impact in Industry 4.0, which goes beyond consumer applications. ... 5G combined with IoT and network APIs enables industrial systems to become truly connected and intelligent. Remote monitoring of manufacturing equipment allows for real-time maintenance schedule adjustments based on machine behavior. Over a programmable, secure network, an API-triggered alert can coordinate a remote diagnostic session and even start remedial actions if a fault is found.
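The bandwidth-boost scenario above can be sketched as a call to a hypothetical quality-on-demand API. The payload fields, profile name, and function here are invented for illustration; real operator APIs (such as those being standardized under the CAMARA project) differ in detail:

```python
import datetime

def request_bandwidth_boost(subscriber_id, profile, duration_minutes):
    """Build a request payload for a hypothetical quality-on-demand
    network API that grants a temporary bandwidth boost."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "subscriberId": subscriber_id,
        "qosProfile": profile,  # e.g. a high-throughput tier (name is illustrative)
        "startsAt": now.isoformat(),
        "expiresAt": (now + datetime.timedelta(minutes=duration_minutes)).isoformat(),
    }

# Boost a viewer's connection for a 90-minute football match; the session
# expires on its own, reverting the user to their standard plan.
session = request_bandwidth_boost("sub-1234", "QOS_E", duration_minutes=90)
print(session["qosProfile"])
```

The key design point is the expiry timestamp: the network, not the application, is responsible for tearing the boost down, which is what makes the "no friction, no intervention" experience possible.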


Quantum Computers Just Reached the Holy Grail – No Assumptions, No Limits

A breakthrough led by Daniel Lidar, a professor of engineering at USC and an expert in quantum error correction, has pushed quantum computing past a key milestone. Working with researchers from USC and Johns Hopkins, Lidar’s team demonstrated a powerful exponential speedup using two of IBM’s 127-qubit Eagle quantum processors — all operated remotely through the cloud. Their results were published in the prestigious journal Physical Review X. “There have previously been demonstrations of more modest types of speedups like a polynomial speedup,” says Lidar, who is also the cofounder of Quantum Elements, Inc. “But an exponential speedup is the most dramatic type of speedup that we expect to see from quantum computers.” ... What makes a speedup “unconditional,” Lidar explains, is that it doesn’t rely on any unproven assumptions. Prior speedup claims required the assumption that there is no better classical algorithm against which to benchmark the quantum algorithm. Here, the team led by Lidar used an algorithm they modified for the quantum computer to solve a variation of “Simon’s problem,” an early example of quantum algorithms that can, in theory, solve a task exponentially faster than any classical counterpart, unconditionally.
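For readers unfamiliar with Simon's problem, a toy classical version shows why it is hard without a quantum computer: f is a 2-to-1 function hiding a mask s such that f(x) = f(x XOR s), and classically recovering s amounts to a collision search needing on the order of 2^(n/2) queries. This brute-force sketch is purely illustrative and is not the modified algorithm used in the paper:

```python
def make_simon_oracle(s, n):
    """A 2-to-1 function f on n-bit inputs with f(x) == f(x ^ s)
    for a hidden mask s."""
    table, label = {}, 0
    for x in range(2 ** n):
        if x not in table:
            table[x] = label
            table[x ^ s] = label  # the XOR partner gets the same output
            label += 1
    return lambda x: table[x]

def find_mask_classically(f, n):
    """Recover s by searching for a collision f(x1) == f(x2);
    in the worst case this needs ~2^(n/2) queries."""
    seen = {}
    for x in range(2 ** n):
        y = f(x)
        if y in seen and seen[y] != x:
            return seen[y] ^ x  # colliding inputs differ by exactly s
        seen[y] = x
    return 0  # no collision: f is 1-to-1, so s == 0

f = make_simon_oracle(s=0b1011, n=4)
print(find_mask_classically(f, 4))  # → 11, i.e. 0b1011
```

Simon's quantum algorithm, by contrast, recovers s with only a linear number of oracle queries, which is the source of the exponential separation the article describes.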


4 things that make an AI strategy work in the short and long term

Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into its transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers. ... While analysts often lament the difficulty of showing short-term ROI for AI projects, these four organizations disagree — at least in part. Their secret: flexible thinking and diverse metrics. They view ROI not only as dollars saved or earned, but also as time saved, satisfaction increased, and strategic flexibility gained. London says that Upwave listens for customer signals like positive feedback, contract renewals, and increased engagement with AI-generated content. Given the low cost of implementing prebuilt AI models, even modest wins yield high returns. For example, if a customer cites an AI-generated feature as a reason to renew or expand their contract, that’s taken as a strong ROI indicator. Trimble uses lifecycle metrics in engineering and operations. For instance, one customer used Trimble AI tools to reduce the time it took to perform a tunnel safety analysis from 30 minutes to just three.


How IT Leaders Can Rise to a CIO or Other C-level Position

For any IT professional who aspires to become a CIO, the key is to start thinking like a business leader, not just a technologist, says Antony Marceles, a technology consultant and founder of software staffing firm Pumex. "This means taking every opportunity to understand the why behind the technology, how it impacts revenue, operations, and customer experience," he explained in an email. The most successful tech leaders aren't necessarily great technical experts, but they possess the ability to translate tech speak into business strategy, Marceles says, adding that "Volunteering for cross-functional projects and asking to sit in on executive discussions can give you that perspective." ... CIOs rarely have solo success stories; they're built up by the teams around them, Marceles says. "Colleagues can support a future CIO by giving honest feedback, nominating them for opportunities, and looping them into strategic conversations." Networking also plays a pivotal role in career advancement, not just for exposure, but for learning how other organizations approach IT leadership, he adds. Don't underestimate the power of having an executive sponsor, someone who can speak to your capabilities when you’re not there to speak for yourself, Eidem says. "The combination of delivering value and having someone champion that value -- that's what creates real upward momentum."


SLMs vs. LLMs: Efficiency and adaptability take centre stage

SLMs are becoming central to Agentic AI systems due to their inherent efficiency and adaptability. Agentic AI systems typically involve multiple autonomous agents that collaborate on complex, multi-step tasks and interact with environments. Fine-tuning methods like Reinforcement Learning (RL) effectively imbue SLMs with task-specific knowledge and external tool-use capabilities, which are crucial for agentic operations. This enables SLMs to be efficiently deployed for real-time interactions and adaptive workflow automation, overcoming the prohibitive costs and latency often associated with larger models in agentic contexts. ... Operating entirely on-premises ensures that decisions are made instantly at the data source, eliminating network delays and safeguarding sensitive information. This enables timely interpretation of equipment alerts, detection of inventory issues, and real-time workflow adjustments, supporting faster and more secure enterprise operations. SLMs also enable real-time reasoning and decision-making through advanced fine-tuning, especially Reinforcement Learning. RL allows SLMs to learn from verifiable rewards, teaching them to reason through complex problems, choose optimal paths, and effectively use external tools. 


Quantum’s quandary: racing toward reality or stuck in hyperbole?

One important reason is for researchers to demonstrate their advances and show that they are adding value. Quantum computing research requires significant expenditure, and the return on investment will be substantial if a quantum computer can solve problems previously deemed unsolvable. However, this return is not assured, nor is the timeframe for when a useful quantum computer might be achievable. To continue to receive funding and backing for what ultimately is a gamble, researchers need to show progress — to their bosses, investors, and stakeholders. ... As soon as such announcements are made, scientists and researchers scrutinize them for weaknesses and hyperbole. The benchmarks used for these tests are subject to immense debate, with many critics arguing that the computations are not practical problems or that success in one problem does not imply broader applicability. In Microsoft’s case, a lack of peer-reviewed data means there is uncertainty about whether the Majorana particle even exists beyond theory. The scientific method encourages debate and repetition, with the aim of reaching a consensus on what is true. However, in quantum computing, marketing hype and the need to demonstrate advancement take priority over the verification of claims, making it difficult to place these announcements in the context of the bigger picture.


Ethical AI for Product Owners and Product Managers

As the product and customer information steward, the PO/PM must lead the process of protecting sensitive data. The Product Backlog often contains confidential customer feedback, competitive analysis, and strategic plans that cannot be exposed. This guardrail requires establishing clear protocols for what data can be shared with AI tools. A practical first step is to lead the team in a data classification exercise, categorizing information as Public, Internal, or Restricted. Any data classified for internal use, such as direct customer quotes, must be anonymized before being used in an AI prompt. ... AI is proficient at generating text but possesses no real-world experience, empathy, or strategic insight. This guardrail involves proactively defining the unique, high-value work that AI can assist but never replace. Product leaders should clearly delineate between AI-optimal tasks (creating first drafts of technical user stories, summarizing feedback themes, or checking for consistency across Product Backlog items) and PO/PM-essential areas. These human-centric responsibilities include building genuine empathy through stakeholder interviews, making difficult strategic prioritization trade-offs, negotiating scope, resolving conflicting stakeholder needs, and communicating the product vision. By modeling this partnership and using AI as an assistant to prepare for strategic work, the PO/PM reinforces that their core value lies in strategy, relationships, and empathy.
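The classification-and-anonymization protocol described above might be sketched as a simple gate in front of any AI tool. The labels, helper names, and regex-based scrubbing here are invented for illustration; a production team would pair this with a dedicated PII-detection tool rather than a name list:

```python
import re

# Illustrative classification labels from the exercise described above.
RESTRICTED, INTERNAL, PUBLIC = "Restricted", "Internal", "Public"

def anonymize_quote(quote, customer_names):
    """Strip known customer names from a quote before it enters an AI prompt."""
    for name in customer_names:
        quote = re.sub(re.escape(name), "[customer]", quote, flags=re.IGNORECASE)
    return quote

def safe_for_prompt(classification, text, customer_names=()):
    """Gate what may be shared with an external AI tool, per classification."""
    if classification == RESTRICTED:
        raise ValueError("Restricted data must not be sent to AI tools")
    if classification == INTERNAL:
        return anonymize_quote(text, customer_names)
    return text  # Public data can be shared as-is

print(safe_for_prompt(INTERNAL, "Acme Corp loves the new dashboard", ["Acme Corp"]))
# → "[customer] loves the new dashboard"
```

The value of the gate is less the scrubbing itself than forcing every prompt through an explicit classification decision.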


Sharded vs. Distributed: The Math Behind Resilience and High Availability

In probability theory, independent events are events whose outcomes do not affect each other. For example, when throwing four dice, the number displayed on each die is independent of the other three. Similarly, the availability of each server in a six-node application-sharded cluster is independent of the others. This means that each server has an individual probability of being available or unavailable, and the failure of one server is not affected by the failure or otherwise of other servers in the cluster. In reality, there may be shared resources or shared infrastructure that links the availability of one server to another. In mathematical terms, this means that the events are dependent. However, we consider the probability of these types of failures to be low, and therefore, we do not take them into account in this analysis. ... Traditional architectures are limited by single-node failure risk. Application-level sharding compounds this problem because if any node goes down, its shard, and therefore the total system, becomes unavailable. In contrast, distributed databases with quorum-based consensus (like YugabyteDB) provide fault tolerance and scalability, enabling higher resilience and improved availability.
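Under the independence assumption above, the contrast can be made concrete with a few lines of arithmetic. The 99% per-node availability and replication factor of 3 are illustrative assumptions, not figures from the article:

```python
# Availability math for the two architectures, assuming independent
# node failures and a per-node availability p.
p = 0.99  # each node is up 99% of the time (illustrative)
n = 6     # six-node cluster

# Application-sharded: every node must be up for the whole system to be up.
sharded = p ** n

# Quorum-based replication factor 3: a 3-replica group stays available
# as long as at least 2 of its 3 replicas are up.
quorum3 = p**3 + 3 * p**2 * (1 - p)

print(f"sharded (all 6 up):      {sharded:.4%}")  # ≈ 94.15%
print(f"3-replica quorum group:  {quorum3:.4%}")  # ≈ 99.97%
```

Even with highly available individual nodes, requiring all six to be up multiplies the failure exposure, while a quorum group only fails when two of three replicas fail together.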


How FinTechs are turning GRC into a strategic enabler

The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are both intelligent and adaptive (and resilient in themselves). The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence. Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both. We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams.

Daily Tech Digest - January 30, 2025


Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley


Doing authentication right

Like encryption, authentication is one of those things that you are tempted to “roll your own” but absolutely should not. The industry has progressed enough that you should definitely “buy and not build” your authentication solution. Plenty of vendors offer easy-to-implement solutions and stay diligently on top of the latest security issues. Authentication also becomes a tradeoff between security and a good user experience. ... Passkeys are a relatively new technology and there is a lot of FUD floating around out there about them. The bottom line is that they are safe, secure, and easy for your users. They should be your primary way of authenticating. Several vendors make implementing passkeys not much harder than inserting a web component in your application. ... Forcing users to use hard-to-remember passwords means they will be more likely to write them down or use a simple password that meets the requirements. Again, it may seem counterintuitive, but XKCD has it right. In addition, the longer the password, the harder it is to crack. Let your users create long, easy-to-remember passwords rather than force them to use shorter, difficult-to-remember passwords. ... Six digits is the outer limit for OTPs, and you should consider shorter ones. Under no circumstances should you require OTPs longer than six digits because they are vastly harder for users to keep in short-term memory.
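The "length beats complexity" point can be made concrete with a quick entropy calculation. This is a rough sketch: the pool sizes assume fully random choices, which real users rarely make, so it understates how much better long passphrases fare in practice against human-chosen "complex" passwords:

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` independent random choices from a pool."""
    return length * math.log2(pool_size)

# 8 random characters drawn from all 95 printable ASCII characters,
# versus 20 random lowercase letters (e.g. a few memorable words).
short_complex = entropy_bits(95, 8)
long_simple = entropy_bits(26, 20)

print(f"8-char complex password: {short_complex:.1f} bits")  # ≈ 52.6 bits
print(f"20-char simple password: {long_simple:.1f} bits")    # ≈ 94.0 bits
```

Each extra character multiplies the search space, so length compounds in a way that a larger character set at fixed length cannot match.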


Augmenting Software Architects with Artificial Intelligence

Technical debt is mistakenly thought of as just a source code problem, but the concept is also applicable to source data (this is referred to as data debt) as well as your validation assets. AI has been used for years to analyze existing systems to identify potential opportunities to improve the quality (to pay down technical debt). SonarQube, CAST SQG, and BlackDuck’s Coverity Static Analysis statically analyze existing code; Applitools Visual AI dynamically finds user interface (UI) bugs; and Veracode’s DAST finds runtime vulnerabilities in web apps. The advantage of this use case is that it pinpoints aspects of your implementation that potentially should be improved. As described earlier, AI tooling offers the potential for greater range, thoroughness, and trustworthiness of the work products as compared with that of people. Drawbacks to using AI tooling to identify technical debt include the accuracy, IP, and privacy risks described above. ... As software architects, we regularly work with legacy implementations that we need to leverage and often evolve. This software is often complex, using a myriad of technologies for reasons that have been forgotten over time. Tools such as CAST Imaging, which visualizes existing code, and ChartDB, which visualizes legacy data schemas, provide a “birds-eye view” of the actual situation that you face.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

Your first step should be to evaluate the state of your company’s cyber defenses, including communications and IT infrastructure, and the cybersecurity measures you already have in place—identifying any vulnerabilities and gaps. One vulnerability to watch for is a dependence on multiple security platforms, patches, policies, hardware, and software, where a lack of tight integration can create gaps that hackers can readily exploit. Consider using operational resilience assessment software as part of the exercise, and if you lack the internal know-how or resources to manage the assessment, consider enlisting a third-party operational resilience risk consultant. ... Aging network communications hardware and software, including on-premises systems and equipment, are top targets for hackers during a disaster because they often include a single point of failure that’s readily exploitable. The best counter in many cases is to move the network and other key communications infrastructure (a contact center, for example) to the cloud. Not only do cloud-based networks such as SD-WAN (software-defined wide area network) have the resilience and flexibility to preserve connectivity during a disaster, they also tend to come with built-in cybersecurity measures.


California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal

“The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before dovetailing into the many ways in which AI companies could, potentially, be breaking the law. ... There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble.


Knowledge graphs: the missing link in enterprise AI

Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. So in theory, they’d be a great way to help LLMs understand the meaning of corporate data sets, making it easier and more efficient for companies to find relevant data to embed into queries, and making the LLMs themselves faster and more accurate. ... Knowledge graphs reduce hallucinations, he says, but they also help solve the explainability challenge. Knowledge graphs sit on top of traditional databases, providing a layer of connection and deeper understanding, says Anant Adya, EVP at Infosys. “You can do better contextual search,” he says. “And it helps you drive better insights.” Infosys is now running proof of concepts to use knowledge graphs to combine the knowledge the company has gathered over many years with gen AI tools. ... When a knowledge graph is used as part of the RAG infrastructure, explicit connections can be used to quickly zero in on the most relevant information. “It becomes very efficient,” said Duvvuri. And companies are taking advantage of this, he says. “The hard question is how many of those solutions are seen in production, which is quite rare. But that’s true of a lot of gen AI applications.”
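As a toy illustration of the RAG idea described above, a knowledge graph's explicit edges let retrieval follow relationships outward from an entity instead of scanning everything. The entities, edge types, and documents below are invented for illustration:

```python
# A tiny knowledge graph: entity -> {relation: [neighbors]}.
graph = {
    "Acme Corp": {"supplies": ["WidgetCo"], "located_in": ["Berlin"]},
    "WidgetCo": {"customer_of": ["Acme Corp"]},
}
# Documents attached to each entity in the underlying data store.
docs_by_entity = {
    "Acme Corp": ["2024 Acme supplier contract"],
    "WidgetCo": ["WidgetCo quarterly review"],
    "Berlin": ["Berlin office lease"],
}

def related_entities(entity, hops=1):
    """Follow explicit graph edges to collect contextually linked entities."""
    frontier, seen = {entity}, {entity}
    for _ in range(hops):
        frontier = {n for e in frontier
                      for ns in graph.get(e, {}).values()
                      for n in ns} - seen
        seen |= frontier
    return seen

def retrieve(entity):
    """Pull only documents attached to the entity and its graph neighbors."""
    return [d for e in sorted(related_entities(entity))
              for d in docs_by_entity.get(e, [])]

print(retrieve("Acme Corp"))
# → ['2024 Acme supplier contract', 'Berlin office lease', 'WidgetCo quarterly review']
```

Because the edges are explicit, the retrieval step is both efficient and explainable: you can state exactly which relationship brought each document into the prompt context.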


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet, a human editing together multiple AI generated video clips into a new whole would seem to qualify. The report also clarifies that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media. ... While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues.


From connectivity to capability: The next phase of private 5G evolution

Faster connectivity is just one positive aspect of private 5G networks; they are the basis of the current digital era. These networks outperform conventional public 5G capabilities, giving businesses incomparable control, security, and flexibility. For instance, private 5G is essential to the seamless connection of billions of devices, ensuring ultra-low latency and excellent reliability in the worldwide IoT industry, which has the potential to reach $650.5 billion by 2026, as per Markets and Markets. Take digital twins, for example—virtual replicas of physical environments such as factories or entire cities. These replicas require real-time data streaming and ultra-reliable bandwidth to function effectively. Private 5G enables this by delivering consistent performance, turning theoretical models into practical tools that improve operational efficiency and decision-making. ... Private 5G is also making big improvements in sectors that rely on efficiency and precision. For instance, in the logistics sector, it connects fleets, warehouses, and ports with fast, low-latency networks, streamlining operations throughout the supply chain. In fleet management, private 5G allows real-time tracking of vehicles, improving route planning and fuel use.


American CISOs should prepare now for the coming connected-vehicle tech bans

The rule BIS released is complex and intricate and relies on many pre-existing definitions and policies used by the Commerce Department for different commercial and industrial matters. However, in general, the restrictions and compliance obligations under the rule affect the entire US automotive industry, including all new on-road vehicles sold in the United States (except commercial vehicles such as heavy trucks, for which rules will be determined later). All companies in the automotive industry, including importers and manufacturers of CVs, equipment manufacturers, and component suppliers, will be affected. BIS said it may grant limited specific authorizations to allow mid-generation CV manufacturers to participate in the rule’s implementation period, provided that the manufacturers can demonstrate they are moving into compliance with the next generation. ... Connected vehicles and related component suppliers are required to scrutinize the origins of vehicle connectivity systems (VCS) hardware and automated driving systems (ADS) software to ensure compliance. Suppliers must exclude components with links to the PRC or Russia, which has significant implications for sourcing practices and operational processes.


What to know about DeepSeek AI, from cost claims to data privacy

"Users need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email. According to some observers, the fact that R1 is open-source means increased transparency, giving users the opportunity to inspect the model's source code for signs of privacy-related activity. Regardless, DeepSeek also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online). ... "DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," confirms Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."


The AI Imperative: How CIOs Can Lead the Charge

For CIOs, AGI will take this to the next level. Imagine systems that don't just fix themselves but also strategize, optimize and innovate. AGI could automate 90% of IT operations, freeing up teams to focus on strategic initiatives. It could revolutionize cybersecurity by anticipating and neutralizing threats before they strike. It could transform data into actionable insights, driving smarter decisions across the organization. The key is to begin incrementally, prove the value and scale strategically. AGI isn't just a tool; it's a game-changer. ... Cybersecurity risks are real and imminent. Picture this: you're using an open-source AI model and suddenly, your system gets hacked. Turns out, a malicious contributor slipped in some rogue code. Sounds like a nightmare, right? Open-source AI is powerful, but has its fair share of risks. Vulnerabilities in the code, supply chain attacks and lack of appropriate vendor support are absolutely real concerns. But this is true for any new technology. With the right safeguards, we can minimize and mitigate these risks. Here's what I recommend: Regularly review and update open-source libraries. CIOs should encourage their teams to use tools like software composition analysis to detect suspicious changes. Train your team to manage and secure open-source AI deployments. 

Daily Tech Digest - January 22, 2025

How Operating Models Need to Evolve in 2025

“In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as ‘AIOps.’ However, the use of AI will become so common in IT operations that we won’t need to call it [that] explicitly,” says Ruh in an email interview. “Instead, the term, ‘AIOps’ will become obsolete over the next two years as enterprises move towards the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy.” ... “The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation,” says Langley in an email interview. “These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals.” ... “IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment,” says Langley.


Cybersecurity is tough: 4 steps leaders can take now to reduce team burnout

Whether it’s about solidifying partnerships with business managers, changing corporate culture, or correcting errant employees, peer input is golden. No matter the scenario, it’s likely that other security leaders have dealt with the same or similar situations, so their input, empathy, and advice are invaluable. ... Well-informed leaders are more likely to champion and include security in new initiatives, an important shift in culture from seeing security as a pain to embracing security as an important business tool. Such a shift greatly reduces another top stressor among CISOs — lack of management support. In a security-centric organization, team members in all roles experience less pressure to perform miracles with no resources. And, instead of fighting with leaders for resources, the CISO has more time to focus on getting to know and better manage staff. ... Recognition, she says, boosts individual and team morale and motivation. “I am grateful for and do not take for granted having excellent leadership above me that supports me and my team. I try to make it easy for them.” And, since personal stressors also impact burnout, she encourages team members to share their personal stressors at her one-on-ones or in the group meeting where they can be supported.


Mandatory MFA, Biometrics Make Headway in Middle East, Africa

Digital identity platforms, such as UAE Pass in the United Arab Emirates and Nafath in Saudi Arabia, integrate with existing fingerprint and facial-recognition systems and can reduce the reliance on passwords, says Chris Murphy, a managing director with the cybersecurity practice at FTI Consulting in Dubai. "With mobile devices serving as the primary gateway to digital services, smartphone-based biometric authentication is the most widely used method in public and private sectors," he says. "Some countries, such as the UAE and Saudi Arabia, are early adopters of passwordless authentication, leveraging AI-based facial recognition and behavioral analytics for seamless and secure identity verification." African nations have also rolled out national identity cards based on biometrics. In South Africa, for example, customers can walk into a bank and open an account by using their fingerprint and linking it to the national ID database, which acts as the root of trust, says BIO-Key's Sullivan. "After they verify that that person is who they say they are with the Home Affairs Ministry, they can store that fingerprint [in the system]," he says. "From then on, anytime they want to authenticate that user, they just touch a finger. They've just now started rolling out the ability to do that without even presenting your card for subsequent business."


Acronis CISO on why backup strategies fail and how to make them resilient

Start by conducting a thorough business impact analysis. Figure out which processes, applications, and data sets are mission-critical, and decide how much downtime or data loss is acceptable. The more vital the data or application, the tighter (and more expensive) your RTO and RPO targets will be. Having a strong data and systems classification system will make this process significantly easier. There’s always a trade-off: the more stringent your RTO and RPO, the higher the cost and complexity of maintaining the necessary backup infrastructure. That’s why prioritisation is key. For example, a real-time e-commerce database might need near-zero downtime, while archived records can tolerate days of recovery time. Once you establish your priorities, you can use technologies like incremental backups, continuous data protection, and cross-site replication to meet tighter RTO and RPO without overwhelming your network or your budget. ... Start by reviewing any regulatory or compliance rules you must follow; these often dictate which data must be kept and for how long. Keep in mind that some information must not be kept longer than absolutely necessary; personally identifiable information comes to mind. Next, look at the operational value of your data. 
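The mapping from RTO/RPO targets to backup technology can be sketched as a simple tiering rule. The thresholds and tier labels below are illustrative assumptions, not Acronis guidance; the point is that tighter targets buy costlier infrastructure:

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    rto_hours: float   # max tolerable downtime
    rpo_hours: float   # max tolerable data-loss window

def backup_tier(ds: DataSet) -> str:
    """Map RTO/RPO targets to a backup strategy tier.

    Thresholds are illustrative, not prescriptive: the tighter the
    targets, the more expensive the supporting infrastructure.
    """
    if ds.rto_hours < 1 and ds.rpo_hours < 1:
        return "continuous data protection + cross-site replication"
    if ds.rto_hours <= 24:
        return "incremental backups, hourly"
    return "archive, daily or weekly full backups"

catalog = [
    DataSet("ecommerce-db", rto_hours=0.1, rpo_hours=0.05),
    DataSet("crm", rto_hours=8, rpo_hours=4),
    DataSet("archived-records", rto_hours=72, rpo_hours=24),
]
for ds in catalog:
    print(f"{ds.name}: {backup_tier(ds)}")
```

In practice the tier table would come out of the business impact analysis, with one row per classified system rather than hard-coded thresholds.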


The bitter lesson for generative AI adoption

The rapid pace of innovation and the proliferation of new models have raised concerns about technology lock-in. Lock-in occurs when businesses become overly reliant on a specific model with bespoke scaffolding that limits their ability to adapt to innovations. Upon its release, GPT-4 was the same cost as GPT-3 despite being a superior model with much higher performance. Since the GPT-4 release in March 2023, OpenAI prices have fallen another six times for input data and four times for output data with GPT-4o, released May 13, 2024. Of course, an analysis of this sort assumes that generation is sold at cost or a fixed profit, which is probably not true, and significant capital injections and negative margins for capturing market share have likely subsidized some of this. However, we doubt these levers explain all the improvement gains and price reductions. Even Gemini 1.5 Flash, released May 24, 2024, offers performance near GPT-4, costing about 85 times less for input data and 57 times less for output data than the original GPT-4. Although eliminating technology lock-in may not be possible, businesses can reduce their grip on technology adoption by using commercial models in the short run.


Staying Ahead: Key Cloud-Native Security Practices

NHIs represent machine identities used in cybersecurity. They are created by combining a “Secret” (an encrypted password, token, or key) and the permissions allocated to that Secret by a receiving server. In an increasingly digital landscape, the role of these machine identities and their secrets cannot be overstated. This makes the management of NHIs a top priority for organizations, particularly those in industries like financial services, healthcare, and travel. ... As technology has advanced, so too has the need for more thorough and advanced cybersecurity practices. One rapidly evolving area is the management of Non-Human Identities (NHIs), which undeniably interweaves secret data. Understanding and efficiently managing NHIs and their secrets is not just a choice but an imperative for organizations operating in the digital space and leaning on cloud-native applications. NHIs have been sharing their secrets with us for some time, communicating an urgent requirement for attention, understanding and improved security practices. They give us hints about potential security weaknesses through unique identifiers that are not unlike a travel passport. By monitoring, managing, and securely storing these identifiers and the permissions granted to them, we can bridge the troublesome chasm between the security and R&D teams, making for better-protected organizations.
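The "Secret plus granted permissions" definition above can be made concrete. This is a minimal sketch, not any vendor's data model; the field names, the 30-day expiry, and the `svc-ci-01` identity are all assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    """A machine identity: a Secret plus the permissions a server grants it."""
    identifier: str                  # unique ID, like a passport number
    secret: str                      # would be an encrypted token/key in a real system
    permissions: set = field(default_factory=set)
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=30))

    def is_authorized(self, action: str) -> bool:
        # Expired secrets must be rotated, never honored.
        if datetime.now(timezone.utc) >= self.expires_at:
            return False
        return action in self.permissions

ci_bot = NonHumanIdentity("svc-ci-01", secret="<vaulted>",
                          permissions={"repo:read", "artifact:write"})
assert ci_bot.is_authorized("repo:read")
assert not ci_bot.is_authorized("prod:deploy")   # least privilege by default
```

Lifecycle management then amounts to inventorying every such record, rotating its secret, and revoking it when the owning workload is retired — the steps many organizations skip for fear of breaking business processes.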


3 promises every CIO should keep in 2025

To minimize disappointment, technologists need to set the expectations of business leaders. At the same time, they need to evangelize on the value of new technology. “The CIO has to be an evangelist, educator, and realist all at the same time,” says Fernandes. “IT leaders should be under-hypers rather than over-hypers, and promote technology only in the context of business cases.” ... According to Leon Roberge, CIO for Toshiba America Business Solutions and Toshiba Global Commerce Solutions, technology leaders should become more visible to the business and lead by example to their teams. “I started attending the business meetings of all the other C-level executives on a monthly basis to make sure I’m getting the voice of the business,” he says. “Where are we heading? How are we making money? How can I help business leaders overcome their challenges and meet their objectives?” ... CIOs should also build platforms for custom tools that meet the specific needs not only of their industry and geography, but of their company — and even for specific divisions. AI models will be developed differently for different industries, and different data will be used to train for the healthcare industry than for logistics, for example. Each company has its own way of doing business and its own data sets. 


5G in Business: Roadblocks, Catalysts in Adoption - Part 1

Enterprises considering 5G adoption are confronted with several challenges, key among them being high capex, security, interoperability and integration with existing infrastructure, and skills development within their workforce. Achieving consistent coverage and navigating the complex regulatory landscape are also inhibitors to adoption. Jenn Mullen, emerging technology solutions lead at Keysight Technologies, told ISMG that business leaders must address potential security concerns, ensure seamless integration with existing IT infrastructure and demonstrate a strong return on investment. ... Early enterprise 5G projects were unsuccessful as the applications and devices weren't 5G compatible. For instance, in 2021, ArcelorMittal France conceived 5G Steel, a private cellular network serving its steelworks in Dunkirk, Mardyck and Florange (France) - to support its digitalization plans with high-speed, site-wide 5G connectivity. The private network, which covers a 10 square kilometer area, was built by French public network operator Orange. When it turned the network on in October 2022, the connecting devices were only 4G, leading to underutilization. "The availability of 5G-compatible terminals suitable for use in an industrial environment is too limited," said David Glijer, the company's director of digital transformation at the time.


Rethinking Business Models With AI

We have arrived at a new era of transforming business models and organizations by leveraging the power of Gen AI. An AI-powered business model is an organizational framework that fundamentally integrates AI into one or more core aspects of how a company creates, delivers and captures value. Unlike traditional business models that merely use AI as a tool for optimization, a truly AI-powered business model exhibits distinctive characteristics, such as self-reinforcing intelligence, scalable personalization and ecosystem integration. ... As an organization moves through its AI-powered business model innovation journey, it must systematically consider the eight essentials of AI-driven business models (Figure 3) and include a holistic assessment of current state capabilities, identification of AI innovation opportunities and development of a well-defined map of the transformation journey. Following this, rapid innovation sprints should be conducted to translate strategic visions into tangible results that validate the identified AI opportunities and de-risk at-scale deployments. ... While the potential rewards are compelling — from operational efficiencies to entirely new value propositions — the journey is complex and fraught with pitfalls, not least from existing barriers. 


Increase in cyberattacks setting the stage for identity security’s rapid growth

Digital identity security is rapidly growing in importance as identity infrastructure becomes a target for cyber attackers. Misconfigurations of identity systems have become a significant concern – but many companies still seem unaware of the issue. Security expert Hed Kovetz says that “identity is always the go-to of every attacker.” As CEO and co-founder of digital identity protection firm Silverfort, he believes that protecting identity is one of their most complicated tasks. “If you ask any security team, I think identity is probably the one that is the most complex,” says Kovetz. “It’s painful: There are so many tools, so many legacy technologies and legacy infrastructure still in place.” ... To secure identity infrastructures, security specialists need to deal with both very old and very new technologies consistently. Kovetz says he first began dealing with legacy systems that could not be properly secured and could be used by attackers to spread inside the network. He later extended that focus to protecting modern technologies as well. “I think that protecting these things end to end is the key,” says Kovetz. “Otherwise, attackers will always go to the weaker part.” ... Although the increase in cyberattacks is setting the stage for identity security’s rapid growth in importance, some organizations are still struggling to acknowledge weaknesses in their identity infrastructure.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - December 23, 2023

How LLMs made their way into the modern data stack in 2023

Beyond helping teams generate insights and answers from their data through text inputs, LLMs are also handling traditionally manual data management and the data efforts crucial to building a robust AI product. In May, Intelligent Data Management Cloud (IDMC) provider Informatica debuted Claire GPT, a multi-LLM-based conversational AI tool that allows users to discover, interact with and manage their IDMC data assets with natural language inputs. It handles multiple jobs within the IDMC platform, including data discovery, data pipeline creation and editing, metadata exploration, data quality and relationships exploration, and data quality rule generation. Then, to help teams build AI offerings, California-based Refuel AI provides a purpose-built large language model that helps with data labeling and enrichment tasks. A paper published in October 2023 also shows that LLMs can do a good job at removing noise from datasets, which is also a crucial step in building robust AI. Other areas in data engineering where LLMs can come into play are data integration and orchestration. 


Corporate governance in 2023: a year in review

2023 has seen a continuing trend of more responsibilities for directors. Often, this responsibility comes from regulators; sometimes, it comes from investors or other stakeholders. One thing is certain, though: directors are rapidly losing any remaining wiggle room to be “rubber-stamp” individuals. Modern board roles carry serious accountability; many directors are starting to appreciate that and adhere to new standards. The trouble is that sometimes the new standards overstretch the director – so much so that we now have concerns about overboarding, exhaustion, and undue stress. How will that play out if the trend of more responsibility continues? ... The board dismissed the evidently popular CEO Sam Altman in a decision made behind closed doors with utmost secrecy. And as the world’s attention predictably turned their way, they could give no answers. Soon, Altman was rehired after around 70% of the company’s staff threatened to resign and join Microsoft (a significant OpenAI investor). The board subsequently agreed to undergo a major reshuffle for more accountability and transparent decision-making.


Quantum Computing’s Hard, Cold Reality Check

The problem isn’t just one of timescales. In May, Matthias Troyer, a technical fellow at Microsoft who leads the company’s quantum computing efforts, co-authored a paper in Communications of the ACM suggesting that the number of applications where quantum computers could provide a meaningful advantage was more limited than some might have you believe. “We found out over the last 10 years that many things that people have proposed don’t work,” he says. “And then we found some very simple reasons for that.” The main promise of quantum computing is the ability to solve problems far faster than classical computers, but exactly how much faster varies. There are two applications where quantum algorithms appear to provide an exponential speed up, says Troyer. One is factoring large numbers, which could make it possible to break the public key encryption the internet is built on. The other is simulating quantum systems, which could have applications in chemistry and materials science. Quantum algorithms have been proposed for a range of other problems including optimization, drug design, and fluid dynamics. 


Navigating the Data Landscape: The Crucial Role of Data Governance in Today’s Business Environment

Data quality management has become increasingly paramount as the volume of data rises exponentially day by day. Organizations can protect their data with policies and procedures, ensure that they follow all the rules and regulations, and hire people who understand the data being collected and what it means to the company, but if that data isn’t high quality, the organization may get the short end of the stick. Maybe you’re three weeks late for a TikTok trend, or you miss out on a whole subset of customers because of a misstep in your collection methods; either way, the lost profit and the missed chance to build on that data in the future could prove pivotal. Ensuring that your organization has processes to monitor and improve data quality on a continuous basis will save time and money in the long run. Despite its importance, implementing effective data governance comes with challenges. Organizations often face resistance to change, cultural barriers, and the complexity of managing diverse data sources.


Choosing Between Message Queues and Event Streams

There are numerous distinctions between technologies that allow you to implement event streaming and those that you can use for message queueing. To highlight them, I will compare Apache Kafka and RabbitMQ. I’ve chosen Kafka and RabbitMQ specifically because they are popular, widely used solutions providing rich capabilities that have been extensively battle-tested in production environments. ... Message queueing and event streaming can both be used in scenarios requiring decoupled, asynchronous communication between different parts of a system. For instance, in microservices architectures, both can power low-latency messaging between various components. However, going beyond messaging, event streaming and message queueing have distinct strengths and are best suited to different use cases. ... Message queueing is a good choice for many messaging use cases. It’s also an appealing proposition if you’re early in your event-driven journey; that’s because message queueing technologies are generally easier to deploy and manage than event streaming solutions. 
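The core semantic distinction between the two models can be shown without any broker at all. This in-memory sketch is an assumption-laden simplification of what Kafka and RabbitMQ actually do, but it captures the contrast: a queue delivers each message once and then it is gone, while a stream is an append-only log that any consumer can replay from its own offset:

```python
from collections import deque

class Queue:
    """Message queueing: each message is consumed once, then removed."""
    def __init__(self):
        self._q = deque()
    def publish(self, msg):
        self._q.append(msg)
    def consume(self):
        return self._q.popleft() if self._q else None

class Stream:
    """Event streaming: an append-only log; consumers track their own offsets."""
    def __init__(self):
        self._log = []
    def publish(self, event):
        self._log.append(event)
    def read(self, offset):
        return self._log[offset:]          # replayable from any offset

q = Queue()
q.publish("job-1")
assert q.consume() == "job-1"
assert q.consume() is None                 # consumed once, gone

s = Stream()
s.publish("order-created")
s.publish("order-paid")
assert s.read(0) == ["order-created", "order-paid"]   # consumer A replays all
assert s.read(1) == ["order-paid"]                    # consumer B keeps its own offset
```

That replayability is what makes streams suited to event sourcing and analytics, while the consume-once semantics of queues fit task distribution among workers.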


5G and edge computing: What they are and why you should care

Instead of relying solely on large, high-powered cell towers (as 4G does), 5G will run off both those towers and a ton of small cell sites that can be clustered together. This is how 5G achieves its population density. 5G is also supposed to be more energy efficient. As such, the communications component of IoT devices won't drain as much power, resulting in longer battery life for connected devices. There's also a ton of AI and machine learning in 5G implementations. 5G nodes and interface devices are deployed on the edge, away from central hubs. They utilize AI and machine learning to analyze communications performance, and use AI to bandwidth-shape communications, to wring as much performance out of the hardware as possible. You're familiar with the term "cloud computing." We've all used cloud services, services that run on a server someplace rather than on our desktop computers or mobile devices. The cloud, of course, isn't really a cloud. Amazon, Google, Facebook, Microsoft, and others operate massive data centers packed with thousands upon thousands of servers. Soft and fluffy, the cloud is not.


Stolen Booking.com Credentials Fuel Social Engineering Scams

Social engineering expert Sharon Conheady said this type of trickery remains extremely difficult to repel, because of the customer-first nature of hospitality. Many public-facing people in such organizations, such as receptionists, are "trained to help people - that's their job," and of course they're going to bend over backwards to try to meet apparent customers' demands, Conheady said in an interview at this month's Black Hat Europe conference in London. Help desks remain another frequent target. "I had a client lately who asked me to call the help desk and obtain BitLocker keys," she said, referring to a recent penetration test. "Every single one of the help desk agents gave us the BitLocker key." That prompted her to ask: Do these personnel even know what a BitLocker key is, and why they shouldn't share it? The client said they didn't know. While training people in customer-facing roles can help, Conheady said the only truly effective approach would be to put in place strong technical controls to outright prevent and block such attacks.


Significantly Improving Security Posture: A CMMI Case Study

“Phoenix Defense has led the way in adopting CMMI best practices for nearly two decades, and has now included the Security best practices,” says Kris Puthucode, Certified CMMI High Maturity Lead Appraiser at Software Quality Center LLC. “This adoption has yielded quantifiable benefits, enhancing security posture across Mission, Personnel, Physical, Process, and Cybersecurity domains. Additionally, incorporating Virtual work best practices has standardized virtual meetings and events, boosting efficiency.” Phoenix Defense has been a CMMI Performance Solutions Organization since 2005, first achieving Maturity Level 5 in 2020. ... Before adopting CMMI Security and Managing Security Threats and Vulnerabilities Practice Areas in the model, Phoenix Defense had a closed network with no outward-facing applications and relied on a third-party vendor to monitor threats and spam. They did not fully, quantitatively track attacks against the networks or other data flows, and they required a more robust approach to properly ensure network security.


5 common data security pitfalls — and how to avoid them

While regulations like GDPR and SOX set standards for data security, they are merely starting points and should be considered table stakes for protecting data. Compliance should not be mistaken for complete data security, as robust security involves going beyond compliance checks. The fact is that many large data breaches have occurred in organizations that were fully compliant on paper. Moving beyond compliance requires actively identifying and mitigating risks rather than just ticking boxes during audits. ... Data is one of the most valuable assets for any organization. And yet, the question, “Who owns the data?” often leads to ambiguity within organizations. Clear delineation of data ownership and responsibility is crucial for effective data governance. Each team or employee must understand their role in protecting data to create a culture of security. ... Unpatched vulnerabilities are one of the easiest targets for cyber criminals. This means that organizations face significant risks when they can’t address public vulnerabilities quickly. Despite the availability of patches, many enterprises delay deployment for various reasons, which leaves sensitive data vulnerable.
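Addressing public vulnerabilities quickly starts with deciding what to patch first. A hedged sketch of one common prioritization heuristic — highest CVSS score first, breaking ties toward internet-exposed assets. The CVE identifiers and scores below are made up for illustration, and real programs would also weigh exploit availability and asset criticality:

```python
def patch_priority(vulns):
    """Order vulnerabilities by CVSS score (descending), breaking ties
    toward internet-exposed assets, so the easiest targets for
    attackers are patched first."""
    return sorted(vulns, key=lambda v: (-v["cvss"], not v["exposed"]))

vulns = [
    {"cve": "CVE-A", "cvss": 7.5, "exposed": False},
    {"cve": "CVE-B", "cvss": 9.8, "exposed": True},
    {"cve": "CVE-C", "cvss": 9.8, "exposed": False},
]
ordered = [v["cve"] for v in patch_priority(vulns)]
assert ordered == ["CVE-B", "CVE-C", "CVE-A"]
```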


Outmaneuvering AI: Cultivating Skills That Make Algorithms Scratch Their Head

Reasoning, the intellectual ninja of skills, is all about slicing through misinformation, assumptions, and biases to get to the heart of the matter. It’s not just drawing conclusions, but thinking about how we do that. This skill is the brain’s bouncer, keeping cognitive fallacies and hasty generalizations at bay. We humans, bless our hearts, are prone to jumping on the bandwagon or seeing patterns where there are none (like seeing a face on Mars or believing in hot streaks at Vegas). These mental shortcuts, or heuristics, can lead us astray, making reasoning not just useful but essential. AI is trained on our past reasoning reflected in old works. But it can’t reason on its own — at least not yet. Consider a business deciding whether to invest in a new technology. Without proper reasoning, they might follow the hype (everyone else is doing it!) or rely on gut feelings (it just feels right!). But with reasoning, they dissect the decision, weigh the evidence, consider alternatives, and make a choice that’s not just good on paper, but good in reality.



Quote for the day:

"Whether you think you can or you think you can’t, you’re right." -- Henry Ford

Daily Tech Digest - November 18, 2023

What You Need to Know About Securing 5G Networks and Communication

IoT devices have exploded over the past several years, and this growth shows no signs of slowing down. And all of these devices have one thing in common: Remote connectivity via a public 4G or 5G network, or, increasingly, a private 5G network. This explosion of connected devices creates an expanded attack surface, since the entire network is only as secure as its weakest link. Specifically, even if a network is secure, any devices attached to it that are not secure in how they communicate or receive updates create a breach opportunity. As a result, it’s essential that every device has an identity and that each identity is managed. This might sound daunting, but it’s not as complex as it seems at first – it goes back to the building blocks of PKI. Much of the security industry has a handle on running PKI for enterprise networks in their organization (think laptops, mobile devices, and so on). Therefore, security teams are also enabled to do PKI for these smart devices — it’s the same approach for a different endpoint.
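The "every device has a managed identity" idea can be sketched with a toy enrollment registry. This uses shared HMAC keys purely to keep the example self-contained; a real PKI deployment would issue per-device certificates from a CA rather than storing symmetric keys, and the device names are invented:

```python
import hashlib
import hmac
import secrets

class DeviceRegistry:
    """Toy stand-in for PKI-style device identity: every device is
    enrolled with a unique identity and key, and every message must
    prove that identity before it is trusted."""
    def __init__(self):
        self._keys = {}
    def enroll(self, device_id: str) -> bytes:
        key = secrets.token_bytes(32)
        self._keys[device_id] = key
        return key
    def verify(self, device_id: str, payload: bytes, tag: bytes) -> bool:
        key = self._keys.get(device_id)
        if key is None:
            return False                      # unknown devices are rejected
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

registry = DeviceRegistry()
key = registry.enroll("sensor-42")
msg = b"temperature=21.5"
tag = hmac.new(key, msg, hashlib.sha256).digest()
assert registry.verify("sensor-42", msg, tag)
assert not registry.verify("sensor-43", msg, tag)   # no identity, no trust
```

The weakest-link point falls out directly: any device that can skip `verify` becomes the breach opportunity for the whole network.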


To AI Hell and Back: Finding Salvation Through Empathy

Iannopollo said the guides assisting in AI Hell could come from IT, marketing, or the executive team. “All of them understand the incredible opportunity of generative AI and the unparalleled transformative power of the new technology. And they know that without adequate security, privacy, and risk governance.” According to Forrester’s research, 36% of respondents in those groups said privacy and security are the greatest barriers to generative AI adoption, while another 31% said governance and risk were the biggest hurdle. Another 61% cited concerns that GenAI could violate privacy and data protection laws like the EU’s GDPR. “So, concerns exist,” she said. “But remember, Hell is a place of confusion.” As more frameworks come online -- more regulations, there may be less confusion and the guides will help businesses assess their AI adoption. ... Once you are out of AI Hell, like Dante, your story is not complete. Dante had to first stop in purgatory. And after spending time in AI Hell dealing with the questions of risk and threats, businesses will need to figure out a compliance strategy.


Conceptual vs. Logical vs. Physical Data Modeling

“Companies need to do Data Modeling to solve a specific business problem or answer a business question,” summarized Aiken. IT and businesses need to share goals and understanding to get to a data solution. Moreover, there needs to be a common language between systems for data to flow smoothly. However, slapping together any model or a big overarching enterprise architecture will not be helpful. A data model needs to achieve a particular purpose, and getting there requires a systematic process. Aiken’s three-dimensional model evolution framework provides resources for an improved data platform. It considers the existing architecture and the evolution needed to meet business needs and validates that stakeholders and builders are on the same page. A combination of conceptual, logical, and physical data models promises meaningful and useful results, especially where business and IT need to achieve a common objective. Doing the data modeling correctly and understanding requirements frees up 20% of the time and money for corporations to leverage their data capabilities and get more value from them.
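The three model levels can be shown side by side with a deliberately tiny example. The Customer/Order domain is invented for illustration; the point is the progression from business entities, to typed attributes and keys, to platform-specific DDL (SQLite here):

```python
import sqlite3

# Conceptual: business entities and relationships, no implementation detail.
conceptual = {"Customer": ["places Order"], "Order": ["placed by Customer"]}

# Logical: attributes, keys, and types, still platform-independent.
logical = {
    "Customer": {"customer_id": "key", "name": "text"},
    "Order": {"order_id": "key", "customer_id": "foreign key", "total": "number"},
}

# Physical: concrete DDL for one specific platform.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE "order" (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        total REAL
    );
""")
tables = {r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
assert tables == {"customer", "order"}
```

Business stakeholders sign off on the conceptual level, architects on the logical, and DBAs on the physical — which is exactly the shared-language role the article attributes to modeling.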


AI: The indispensable ally in the information age

The implementation of AI in data centers must be viewed through a dual lens: risk mitigation and knowledge preservation. As we face a generational turnover in expertise within the industry, with a significant proportion of seasoned professionals retiring, there's an urgent need to capture and transfer this wealth of knowledge. AI and machine learning algorithms, when correctly trained and utilized, can play a crucial role in bridging this knowledge gap. By learning from clean data, benchmarks, and decisions made by experienced personnel, AI systems can emulate, and eventually, enhance these expert-driven processes. This transfer of knowledge is vital not just for maintaining current operational standards, but also for paving the way for more advanced, efficient, and resilient data center architectures. Moreover, AI's potential in managing and reducing operational risks in data centers is monumental. Advanced predictive analytics can foresee and mitigate potential failures, while continuous monitoring AI systems can identify anomalies that hint at future problems, allowing for preemptive maintenance and risk aversion.
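The anomaly detection the article describes starts much simpler than deep learning. A hedged baseline sketch — flag any sensor reading more than a chosen number of standard deviations from the mean; the temperature values and the 2-sigma threshold are invented for illustration:

```python
from statistics import mean, stdev

def anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the
    mean: the simplest form of continuous monitoring, and a useful
    baseline before reaching for any ML model."""
    mu, sigma = mean(readings), stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

# Steady inlet temperatures with one spike hinting at a failing cooling unit.
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 35.0]
assert anomalies(temps) == [35.0]
```

A production system would use rolling windows and per-sensor baselines, but the principle — learn what "normal" looks like, then alert on deviation — is the same one the more sophisticated models apply.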


Shadowy Hack-for-Hire Group Behind Sprawling Web of Global Cyberattacks

The cybersecurity firm's exhaustive analysis of data that Reuters journalists collected showed near-conclusive links between Appin and numerous data theft incidents. These included theft of email and other data by Appin from Pakistani and Chinese government officials. SentinelOne also found evidence of Appin carrying out defacement attacks on sites associated with the Sikh religious minority community in India and of at least one request to hack into a Gmail account belonging to a Sikh individual suspected of being a terrorist. "The current state of the organization significantly differs from its status a decade ago," says Tom Hegel, principal threat researcher at SentinelLabs. "The initial entity, 'Appin,' featured in our research, no longer exists but can be regarded as the progenitor from which several present-day hack-for-hire enterprises have emerged," he says. Factors such as rebranding, employee transitions, and the widespread dissemination of skills contribute to Appin being recognized as the pioneering hack-for-hire group in India, he says. 


Security Firm COO Hacked Hospitals to Drum Up Business

According to the plea agreement, Singla on Sept. 27, 2018, knowingly transmitted a command that resulted in an unauthorized modification to the configuration template for the ASCOM phone system at Gwinnett Medical Center's Duluth hospital campus. As a result, all of the Duluth hospital's ASCOM phones that were connected to the phone system during Singla's transmission were rendered inoperable, and more than 200 ASCOM handset devices were taken offline, the court document says. Those phones were used by Duluth hospital staff, including doctors and nurses, for internal communication, including for "code blue" emergencies. The ASCOM phones were used to place calls outside of the hospital, the court document says. On that same day, Singla - without authorization - obtained information including names, birthdates and the sex of more than 300 patients from a Hologic R2 Digitizer connected to a mammogram machine at Gwinnett's Lawrenceville hospital campus, the document says. The digitizer, which was accessible through Gwinnett's virtual private network, was protected by a password. 


How to Structure and Build a Team For Long-Term Success

Leaders have to be careful not to get caught in a situation where somebody could misconstrue their kindness or attention, but being in leadership doesn't have to mean sacrificing friendships. Balance being friendly with being able to offer necessary corrections. By nature, I tend to be a people pleaser, so I must work on being tougher — especially early in relationships. After my collegiate basketball career ended, I became a high school basketball referee. I found that the whole game went smoother if I was tough in the first quarter of a game. It is important for leaders to establish a sense of control when they first hire a new team member; then they can infuse the second, third and fourth quarters with more friendship. Leaders can have situations that test the relationships they're working to build. Let's say someone has two people on their team, and they have to decide which one gets promoted. The one who didn't get promoted might feel like the leader let them down. In this situation, leaders must maintain enough professional distance that the employee knows the decision was not due to favoritism.


Data is Everybody’s Business: The Fundamentals of Data Monetization

Companies get better at data monetization by practicing it. “Rather than wait for the right set of capabilities to magically appear,” Owens says, “businesses should start engaging in monetization activities. The learning and the returns come from doing, not from talking about doing. For starters, organizations could choose one process or product to improve or a single business challenge to solve with data.” Creating data assets also means creating organizational governance so that the right people use the data in the right ways. Data assets can be monetized only after data is properly cleaned, permissioned with the right security, and made accessible to authorized users. “If you aren’t purposely managing and monetizing your data, it won’t pay off,” says Wixom. A big problem with data is that everybody is starting from scratch all the time, says Wixom. “There isn’t enough attention to accumulating knowledge and skills for the future benefit of the organization. But if you create data assets and establish enterprise capabilities to manage them properly, data can be reused limitlessly for all kinds of value-creating reasons across an organization.”


Blockchain could save AI by cracking open the black box

Blockchain is finally being unchained from crypto, and many now see its potential as a foundation of support and validation for another emerging technology -- AI. Blockchain -- and other distributed ledger technologies -- could even help solve AI's black box problem "by providing a transparent, immutable ledger to monitor model training and trace decision-making processes," according to the authors of a new report. "This gives organizations the ability to audit the data and algorithms used, enabling greater security and trust in AI systems." ... "As AI operations go mainstream -- and as people raise concerns about the technology -- leaders are recognizing the need for a more responsible AI that prioritizes data security and transparency," the survey's authors point out. "Ensuring trustworthiness and reliability of their AI tools is a top priority for businesses, and blockchain is the turnkey solution for addressing the risks that come with AI implementation." Executives have developed a greater level of understanding of blockchain. Seventy-seven percent say they fully understand blockchain and can explain the value of it to their teams -- up five percentage points over last year's survey. 
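The "transparent, immutable ledger to monitor model training" can be illustrated with a hash-chained audit log. This is a minimal, single-node sketch under stated assumptions — the event fields are invented, and a real system would distribute and replicate the chain across nodes rather than keep it in one process:

```python
import hashlib
import json

class AuditChain:
    """Append-only, hash-chained log of model-training events: each
    entry's hash commits to both its event and the previous hash, so
    any retroactive edit breaks the chain."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(event, sort_keys=True) + prev
        h = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True) + prev
            if e["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

chain = AuditChain()
chain.append({"step": "train", "dataset": "v1", "acc": 0.91})
chain.append({"step": "deploy", "model": "m-2024-01"})
assert chain.verify()
chain.entries[0]["event"]["acc"] = 0.99        # tamper with history
assert not chain.verify()                       # tampering is detectable
```

This is what makes the audit claim in the report concrete: auditors can recompute the chain and prove whether the recorded training data and decisions were altered after the fact.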


FinOps Debuts Cloud Transparency Standards

Given that the project is backed by the largest players in the multi-billion-dollar cloud market, several large enterprise users such as Goldman Sachs and Walmart have also backed the initiative. “We are establishing FOCUS as the cornerstone lexicon of FinOps by providing an open source, vendor-agnostic specification featuring a unified schema and language,” says Mike Fuller, CTO at the FinOps Foundation. “With this release, we are paving the way for FOCUS to foster collaboration among major cloud providers, FinOps vendors, leading SaaS providers and forward-thinking FinOps enterprises to establish a unified, serviceable framework for cloud billing data, increasing trust in the data and making it easier to understand the value of cloud spend,” Fuller said in a statement. As readers will know, cloud operators provide customers with billing data detailing the costs of the services they use, including granular details on individual product costs and any discounts. Businesses use this billing data from their service providers to track spend, forecast future costs and build their SaaS budgets.
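The normalization FOCUS aims at can be sketched as a simple column-mapping step. The provider field names and FOCUS-style column names below are illustrative, not an authoritative rendering of the specification:

```python
# Hypothetical mappings from provider billing exports to a FOCUS-like
# schema; actual field names vary by provider and spec version.
PROVIDER_MAPPINGS = {
    "aws": {"lineItem/UnblendedCost": "BilledCost",
            "product/ProductName": "ServiceName",
            "lineItem/UsageStartDate": "ChargePeriodStart"},
    "azure": {"costInBillingCurrency": "BilledCost",
              "meterCategory": "ServiceName",
              "date": "ChargePeriodStart"},
}

def to_focus(provider, row):
    """Translate one provider-specific billing row into the unified schema."""
    mapping = PROVIDER_MAPPINGS[provider]
    return {focus_col: row[src] for src, focus_col in mapping.items()}

aws_row = {"lineItem/UnblendedCost": 12.50,
           "product/ProductName": "Amazon EC2",
           "lineItem/UsageStartDate": "2023-06-01T00:00:00Z"}
print(to_focus("aws", aws_row)["BilledCost"])  # 12.5
```

Once every provider's rows land in one schema, spend tracking and forecasting tools no longer need per-provider parsing logic, which is the point of the unified lexicon.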



Quote for the day:

"Pursue one great decisive aim with force and determination." -- Carl Von Clause Witz

Daily Tech Digest - August 20, 2023

Central Bank Digital Currency (CBDC) and blockchain enable the future of payments

CBDC has the potential to transform the future of payments. It can be used to create programmable money that can be spent only on specific things. For example, a government could issue a stimulus package that can only be spent on certain goods and services. This would ensure that the money is spent in the intended manner and would reduce the risk of fraud. CBDC can also improve financial inclusion. According to the World Bank, around 1.7 billion people do not have access to basic financial services. CBDC can solve this problem by providing a digital currency that anyone with a smartphone can use, without the need for a bank account. When a CBDC holder uses their phone as a medium for transactions, it becomes crucial to establish a strong link between their digital identity and the device they are using. This link is essential to ensure that the right party is involved in the transaction, mitigating the risk of fraud and promoting trust in the digital financial ecosystem. In this way, CBDC and digital identity can work together to improve financial inclusion.
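The programmable-money idea above, stimulus funds spendable only on approved categories, can be sketched as a purse-selection rule. The categories and balances here are invented for illustration:

```python
# Hypothetical category whitelist for restricted stimulus funds.
ALLOWED_STIMULUS_CATEGORIES = {"groceries", "utilities"}

def spend(wallet, category, amount):
    """Prefer restricted stimulus funds when the category qualifies,
    falling back to the unrestricted general-purpose balance."""
    if category in ALLOWED_STIMULUS_CATEGORIES and wallet["stimulus"] >= amount:
        wallet["stimulus"] -= amount
        return "paid from stimulus"
    if wallet["general"] >= amount:
        wallet["general"] -= amount
        return "paid from general"
    raise ValueError("payment refused: funds not eligible for this category")

wallet = {"stimulus": 100.0, "general": 20.0}
print(spend(wallet, "groceries", 60.0))    # paid from stimulus
print(spend(wallet, "electronics", 15.0))  # paid from general
```

In a real CBDC the rule would be enforced by the ledger itself rather than by wallet code, but the enforcement logic, funds that simply cannot move to a non-qualifying payee, is the same.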


A statistical examination of utilization trends in decentralized applications

Decentralized applications (dApps) have proliferated in recent years, but their long-term viability is a topic of debate. For dApps to be sustainable, and suitable for integration into larger service networks, they need to attract users and promise reliable availability. Therefore, assessing their longevity is crucial. Analyzing the utilization trajectory of a service is, however, challenging due to several factors, such as demand spikes, noise, autocorrelation, and non-stationarity. In this study, we employ robust statistical techniques to identify trends in currently popular dApps. Our findings demonstrate that a significant proportion of dApps, across a range of categories, exhibit statistically significant positive overall trends, indicating that success in decentralized computing can be sustainable and transcends specific fields. However, there is also a substantial number of dApps showing negative trends, with a disproportionately high number from the decentralized finance (DeFi) category. 
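The kind of trend detection the study describes can be illustrated with a plain Mann-Kendall test, a standard nonparametric check for monotonic trends. This sketch omits the tie and autocorrelation corrections that a production analysis of dApp usage data would need:

```python
import math
from itertools import combinations

def mann_kendall(series):
    """Plain Mann-Kendall trend test: S counts concordant minus
    discordant pairs; the normal-approximation z-score indicates a
    significant trend when |z| > 1.96 (5% level)."""
    n = len(series)
    s = sum((x_j > x_i) - (x_j < x_i) for x_i, x_j in combinations(series, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Toy daily-active-user counts with one tie and a clear upward drift.
rising = [3, 4, 4, 6, 7, 9, 10, 12, 13, 15]
s, z = mann_kendall(rising)
print(s, round(z, 3))  # 44 3.846 -> significant positive trend
```

Because the test only uses pairwise orderings, it is robust to the demand spikes and noise the abstract mentions; handling autocorrelation and non-stationarity requires the further corrections the study alludes to.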


How SaaS Companies Can Monetize Generative AI

Rather than building these models from scratch, many companies elect to leverage OpenAI’s APIs to call GPT-4 (or other models), and serve the response back to customers. To obtain complete visibility into usage costs and margins, each API call to and from OpenAI tech should be metered to understand the size of the input and the corresponding backend costs, as well as the output, processing time and other relevant performance metrics. By metering both the customer-facing output and the corresponding backend actions, companies can create a real-time view into business KPIs like margin and costs, as well as technical KPIs like service performance and overall traffic. After creating the meters, deploy them to the solution or application where events are originating to begin tracking real-time usage. Once the metering infrastructure is deployed, begin visualizing usage and costs in real time as usage occurs and customers leverage the generative services. Identify power users and lagging accounts and empower customer-facing teams with contextual data to provide value at every touchpoint.
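The metering flow described above can be sketched as a small in-memory meter; the per-token rates below are illustrative, not actual OpenAI pricing, and a real deployment would emit these events to a billing pipeline rather than a list:

```python
# Illustrative per-token rates (USD); real model pricing differs.
INPUT_RATE = 0.03 / 1000
OUTPUT_RATE = 0.06 / 1000

class UsageMeter:
    """Records one event per model call so backend cost, margin and
    latency can be tracked per customer in near real time."""
    def __init__(self):
        self.events = []

    def record(self, customer, input_tokens, output_tokens, latency_s):
        cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
        self.events.append({"customer": customer, "cost": cost,
                            "latency_s": latency_s})
        return cost

    def cost_for(self, customer):
        return sum(e["cost"] for e in self.events if e["customer"] == customer)

meter = UsageMeter()
meter.record("acme", input_tokens=1200, output_tokens=800, latency_s=2.1)
meter.record("acme", input_tokens=500, output_tokens=300, latency_s=0.9)
print(round(meter.cost_for("acme"), 4))  # 0.117
```

Joining these per-customer cost figures with contracted revenue gives the real-time margin view described above, and the latency field feeds the technical KPIs such as service performance.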


“Auth” Demystified: Authentication vs Authorization

There are two technical approaches to modern authorization that are growing ecosystems around them: policy-as-code and policy-as-data. They are similar in that both approaches advocate decoupling authorization logic from the application code. But they also have differences. In policy-as-code systems, the authorization policy is written in a domain-specific language, and stored and versioned in its own repository like any other code. OPA is one well-known example of this approach. It is a CNCF graduated project that is mostly used in infrastructure authorization use cases, such as k8s admission control. It provides a great general-purpose decision engine to enforce authorization logic, and a language called Rego to define that logic as policy. The policy-as-data approach determines access based on relationships between users and the underlying application data. Rather than rely on rules in a policy, these systems use the relationships between subjects (users/groups) and objects (resources) in the application. 
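The policy-as-data approach can be illustrated with a toy relationship store: access is derived from stored subject-object tuples (here with one level of group membership) rather than from rules written in a policy language like Rego. All names and roles are invented:

```python
# Relationship tuples: (subject, relation, object). In a real system
# these live in a database and are updated as the application runs.
relations = {
    ("alice", "owner", "doc:readme"),
    ("bob", "viewer", "doc:readme"),
    ("eng-team", "editor", "doc:design"),
    ("carol", "member", "eng-team"),
}

# What each relation permits on the object it targets.
ROLE_GRANTS = {"owner": {"read", "write", "share"},
               "editor": {"read", "write"},
               "viewer": {"read"}}

def can(subject, action, obj):
    """Authorization decision derived purely from stored relationships."""
    for subj, role, target in relations:
        if target == obj and action in ROLE_GRANTS.get(role, set()):
            if subj == subject:
                return True
            # Follow one level of group membership.
            if (subject, "member", subj) in relations:
                return True
    return False

print(can("bob", "read", "doc:readme"))     # True  (direct viewer)
print(can("bob", "write", "doc:readme"))    # False (viewer can't write)
print(can("carol", "write", "doc:design"))  # True  (via eng-team)
```

A policy-as-code engine such as OPA would instead evaluate a Rego rule against the request; the trade-off is roughly rules-versus-relationships, and both keep authorization logic out of application code.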


Redefining Software Resilience: The Era of Artificial Immune Systems

Artificial Immune Systems (AIS), inspired by the vertebrate immune system, provide an innovative approach to designing self-healing software. By emulating the biological immune system’s ability to adapt, learn, and remember, AIS can empower software systems to detect, diagnose, and fix issues autonomously. AIS offers a framework that enables the software to learn from each interaction, adapt to system changes, and remember past faults and their resolutions, leading to a more robust, resilient system capable of tackling an array of unpredictable errors and vulnerabilities. The vertebrate immune system consists of innate immunity and adaptive immunity. Innate immunity protects us against known pathogens and is non-specific and general; present self-healing software models closely resemble it. Adaptive immunity, by contrast, can learn from current threats and apply that knowledge to future situations. At its core, these systems mimic the vertebrate immune system’s differentiation of self and non-self entities.
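One classic AIS technique, negative selection, can be sketched in a few lines: random detectors that match any "self" (normal) pattern are discarded during generation, and the surviving detectors flag non-self samples as anomalies. The bit patterns and thresholds here are toy values:

```python
import random

random.seed(42)  # deterministic for the example

L = 8  # pattern length in bits
R = 1  # a detector fires when Hamming distance <= R

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def negative_selection(self_set, n_detectors=50):
    """Generate random detectors, keeping only those that do NOT match
    any 'self' pattern -- the negative selection step."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(random.randint(0, 1) for _ in range(L))
        if all(hamming(cand, s) > R for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(detectors, sample):
    """A sample is non-self if any surviving detector fires on it."""
    return any(hamming(d, sample) <= R for d in detectors)

# 'Self' = signatures of normal system behavior (toy encodings).
normal = {(0, 0, 0, 0, 0, 0, 0, 0), (1, 1, 1, 1, 1, 1, 1, 1)}
detectors = negative_selection(normal)
print(is_anomalous(detectors, (0, 0, 0, 0, 0, 0, 0, 0)))  # False: self never fires
```

By construction, self patterns can never trigger a detector, while sufficiently different (non-self) patterns may; adding newly confirmed anomalies as remembered detectors is the adaptive-immunity analogue.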


Europe’s Business Software Startups Prove Resilient: Why?

So what are the factors underpinning the resilience of Europe’s business software sector? One key element of the picture is demand from other tech companies. “Europe’s tech ecosystem is maturing,” says Windsor. “And as the sector matures, companies need tools. Those tools are being supplied by business software companies.” And of course, there is demand from companies outside the tech sector. From banking and financial services to manufacturing, digital transformation is continuing across the economy as a whole, creating opportunities for new B2B software providers. But how do European companies take advantage of those opportunities in a market that has been dominated by North American rivals? This isn’t captured in the data, but Windsor sees a home-market-first approach, widening out to include new countries and territories as businesses grow. “Anecdotally, companies start by selling to their domestic market, then they look at the continent. After that, they expand to other regions.” There is, Windsor adds, a preference for Asia Pacific. The U.S., on the other hand, remains a difficult market.


Open RAN Testing Expands in the US Amid 5G Slowdown

To be clear, open RAN technology in the US has a number of backers. Dish Network is perhaps the most vocal, having built an open RAN-based 5G network across 70% of the US population. Further, other operators have hinted at their own initial open RAN aspirations, including AT&T and Verizon. Interestingly, the US government has also emerged as a leading proponent for open RAN. For example, the US military continues to fund open RAN tests and deployments. And the Biden administration's NTIA is doling out millions of dollars in the pursuit of open RAN momentum. Broadly, US officials hope to use open RAN technologies to encourage the production of 5G equipment domestically and among US allies, as a lever against China. But open RAN continues to face struggles. For example, US-based open RAN vendors like Airspan and Parallel Wireless have hit hurdles recently. And research and consulting firm Dell'Oro recently reported that open RAN revenue growth slowed to the 10 to 20% range in the first quarter, after more than doubling in 2022.


Low-Code and AI: Friends or Foes?

Although it may appear that AI will replace low-code, there are actually many opportunities for symbiosis between the two. Rather than eradicate low-code platforms entirely, LLMs will likely become more embedded within them. We’ve already seen this occur as low-code providers like Mendix and OutSystems integrated ChatGPT connectors. Microsoft has also embedded ChatGPT into its Power Platform and integrated GPT-driven Copilots into various developer environments. “Low-code and AI on their own are powerful tools to increase enterprise efficiency and productivity,” said Dinesh Varadharajan, the chief product officer at Kissflow. “But there is potential for the combination of both to unlock game-changing automation for almost every industry. The power comes from the congruence between low-code/no-code and AI.” There is also the opportunity to train bespoke LLMs on the inner workings of specific software development platforms, which could generate fully built templates from natural-language prompts. 

Cloud cost optimization should begin by measuring the drivers of cloud spend at a granular level and then providing full visibility to the teams and organizations that are behind the spend, says Tim Potter, principal, technology strategy and cloud engineering with Deloitte Consulting. “Near-real-time dashboards showing cloud resource utilization, routine reports of cloud consumption, and predictive spend reports will provide application teams and business units with the data needed to take action to optimize cloud costs,” he notes. ... Rearchitecting applications is a frequently overlooked way to achieve the cost and other benefits of transitioning to a cloud model. “Organizations also need to understand the various discount models and select one that optimizes costs yet also provides flexibility and predictability into spending,” says Mindy Cancila, vice president of corporate strategy for Dell Technologies. Cancila adds that organizations should not only consider current workload costs, but also how to manage costs for workloads as they scale over time.
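The utilization-driven optimization Potter describes can be sketched as a simple report that flags under-used resources; the threshold and resource data below are invented for illustration:

```python
# Hypothetical rightsizing threshold; teams would tune this to their
# own baselines and workload patterns.
UNDERUSED_CPU = 0.10  # average CPU below 10% suggests rightsizing

resources = [
    {"id": "vm-01", "monthly_cost": 420.0, "avg_cpu": 0.04},
    {"id": "vm-02", "monthly_cost": 310.0, "avg_cpu": 0.63},
    {"id": "db-01", "monthly_cost": 980.0, "avg_cpu": 0.08},
]

def optimization_report(resources):
    """Flag under-utilized resources and total the spend at stake,
    the kind of signal a near-real-time dashboard would surface."""
    flagged = [r for r in resources if r["avg_cpu"] < UNDERUSED_CPU]
    at_stake = sum(r["monthly_cost"] for r in flagged)
    return flagged, at_stake

flagged, at_stake = optimization_report(resources)
print([r["id"] for r in flagged], at_stake)  # ['vm-01', 'db-01'] 1400.0
```

Feeding this kind of per-resource utilization data to the teams behind the spend is what turns raw billing exports into actionable rightsizing decisions.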


Warning: Attackers Abusing Legitimate Internet Services

Cloud storage platforms, and Google Cloud in particular, are the most exploited, followed by messaging services - most often Telegram, including via its API - as well as email services and social media, the researchers found. Examples of other services being abused by attackers include OneDrive, Discord, Gmail SMTP, Mastodon profiles, GitHub, bitcoin blockchain data, the project management tool Notion, malware analysis site VirusTotal, YouTube comments and even Rotten Tomatoes movie review site profiles. "It is important to note that ransomware campaigns use legitimate cloud storage tools such as mega.io or MegaSync for exfiltration purposes as well," although the crypto-locking malware itself may not be coded to work directly with legitimate tools, the report says. Criminals' choice of service depends on desired functionality. Anyone using an info stealer such as Vidar needs a place to store large amounts of exfiltrated data. The researchers said cloud services' easy setup for less technically sophisticated users makes them a natural fit for such use cases.



Quote for the day:

"We're all passionate about something, the secret is to figure out what it is, then pursue it with all our hearts" -- Gordon Tredgold