Daily Tech Digest - May 18, 2024

AI imperatives for modern talent acquisition

In talent acquisition, the journey ahead promises to be tougher than ever. Recruiters face a paradigm shift, moving beyond traditional notions of filling vacancies to addressing broader business challenges. The days of simply sourcing candidates are long gone; today's TA professionals must navigate complexities ranging from upskilling and reskilling to mobility and contracting. ... At the heart of it lies a structural shift reshaping the global workforce. Demographic trends, such as declining birth rates, paint a sobering picture of a world where there simply aren't enough people to fill available roles. This demographic drought isn't limited to a single region; it's a global phenomenon with far-reaching implications. Compounding this challenge is the changing nature of careers. No longer tethered to a single company, employees are increasingly empowered to seek out opportunities that align with their aspirations and values. This has profound implications for talent retention and development, necessitating a shift towards systemic HR strategies that prioritise upskilling, mobility, and employee experience.


Ineffective scaled agile: How to ensure agile delivers in complex systems

When developing a complex system, it’s impossible to uncover every challenge, even with the most in-depth upfront analysis. One way of dealing with this is to implement governance that emphasizes incorporating customer feedback, engaging leadership actively, and responding to changes and learnings. Another challenge can arise when teams begin to embrace working autonomously: they start implementing local optimizations, which can lead to inefficiencies. The key is a governance approach that ensures the overall work is broken down into value increments per domain, and then further into value increments per team, at regular time intervals. This creates a shared sense of purpose across teams and guides them towards the same goal. Progress can then be tracked using the working system as the primary measure. Those responsible for steering the overall program need to facilitate feedback and prioritization discussions, and should encourage leadership to adapt to internal insights or changes in the external environment.


How to navigate your way to stronger cyber resilience

If an organization doesn’t have a plan for what to do when a security incident takes place, it risks finding itself in the precarious position of not knowing how to react to events, and consequently doing nothing or doing the wrong thing. The report also shows that just over a third of the smaller companies worry that senior management doesn’t see cyberattacks as a significant risk. How can they get greater buy-in from their management team on the importance of cyber risks? It’s important to understand that this is not a question of management failure. It is hard for business leaders to engage with or care about something they don’t fully understand. The onus is on security professionals to speak in a language that business leaders understand. They need to be storytellers, able to explain how to protect brand reputation through proactive, multi-faceted defense programs. Every business leader understands the concept of risk. If in doubt, present cybersecurity threats, challenges, and opportunities in terms of how they relate to business risk.


DDoS attacks: Definition, examples, and techniques

DDoS botnets are the core of any DDoS attack. A botnet consists of hundreds or thousands of machines, called zombies or bots, that a malicious hacker has gained control over. Attackers build these botnets by identifying vulnerable systems that they can infect with malware through phishing attacks, malvertising attacks, and other mass infection techniques. The infected machines can range from ordinary home or office PCs to IoT devices—the Mirai botnet famously marshalled an army of hacked CCTV cameras—and their owners almost certainly don’t know they’ve been compromised, as they continue to function normally in most respects. The infected machines await a remote command from a so-called command-and-control server, which serves as a command center for the attack and is often itself a hacked machine. Once unleashed, the bots all attempt to access some resource or service that the victim makes available online. Individually, the requests and network traffic directed by each bot towards the victim would be harmless and normal.
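To make that aggregate-versus-individual point concrete, here is a minimal, purely defensive Python sketch of volumetric detection. The thresholds and the TrafficMonitor class are invented for illustration; a production detector would baseline these values against the real traffic profile of the service being protected.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds only - real detectors tune these against the
# observed baseline for the specific service being protected.
PER_SOURCE_LIMIT = 10      # requests/sec that still looks like a normal client
AGGREGATE_LIMIT = 5000     # requests/sec across all sources combined

class TrafficMonitor:
    """Flags floods whose individual sources each look benign."""

    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.events = deque()  # (timestamp, source_ip) pairs

    def record(self, source_ip):
        now = time.monotonic()
        self.events.append((now, source_ip))
        # Drop events that have slid out of the observation window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def looks_like_ddos(self):
        per_source = defaultdict(int)
        for _, ip in self.events:
            per_source[ip] += 1
        total = len(self.events)
        # Each bot stays under the per-source limit, yet the aggregate
        # volume overwhelms the service - the signature of a DDoS.
        sources_look_normal = all(n <= PER_SOURCE_LIMIT for n in per_source.values())
        return total > AGGREGATE_LIMIT and sources_look_normal
```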


7 ways to use AI in IT disaster recovery

The integration of AI into IT disaster recovery is not just a trendy addition; it's a significant enhancement that can lead to quicker response times, reduced downtime and overall improved business continuity. By proactively identifying risks, optimizing resources and continuously learning from past incidents, AI offers a forward-thinking approach to disaster recovery that could be the difference between a minor IT hiccup and a significant business disruption. ... A significant portion of IT disasters are due to cyberthreats. AI and machine learning can help mitigate these issues by continuously monitoring network traffic, identifying potential threats and taking immediate action to mitigate risks. Most new cybersecurity businesses are using AI to learn about emerging threats. They also use AI to look at system anomalies and block questionable activity. ... AI can optimize the use of available resources, ensuring that critical functions receive the necessary resources first. This optimization can greatly increase the efficiency of the recovery process and help organizations working with limited resources.
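As a toy illustration of the anomaly-detection idea, here is a minimal Python sketch that flags outliers in a traffic series using z-scores. The threshold and sample data are invented; the ML systems the article describes learn far richer baselines than a mean and standard deviation, but the principle is the same: model "normal" and alert on deviations.

```python
import statistics

def detect_anomalies(samples, threshold=2.5):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat series: nothing can be anomalous
    return [
        (i, value)
        for i, value in enumerate(samples)
        if abs(value - mean) / stdev > threshold
    ]

# Steady network traffic (Mbps) with one suspicious spike at index 7.
traffic_mbps = [98, 102, 99, 101, 97, 100, 103, 450, 99, 102]
print(detect_anomalies(traffic_mbps))  # -> [(7, 450)]
```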


Underwater datacenters could sink to sound wave sabotage

In a paper available on the arXiv open-access repository, the researchers detail how sound at a resonant frequency of the hard disk drives (HDDs) deployed in submerged enclosures can cause throughput reduction and even application crashing. HDDs are still widely used in datacenters, despite their obituary having been written many times, and are typically paired with flash-based SSDs. The researchers focused on hybrid and full-HDD architectures to evaluate the impact of acoustic attacks. They found that sound at the right resonance frequency induces vibrations in the read-write head and platter of the disks through vibration propagation, proportional to the acoustic pressure, or intensity, of the sound. This degrades the disks' read/write performance. For the tests, a Supermicro rack server configured with a RAID 5 storage array was placed inside a metal enclosure in two scenarios: an indoor laboratory water tank and an open-water testing facility, which was actually a lake on the University of Florida campus. Sound was generated from an underwater speaker.


Agile Design, Lasting Impact: Building Data Centers for the AI Era

While there is a clear need for more data centers, the development timeline for building new, modern data centers incorporating these technologies and regulatory adaptations is currently between three and five years (longer in some cases). And not just that: the fast pace at which technology is evolving means manufacturers are likely to face the need to rethink strategy and innovation mid-build to accommodate further advancements. ... This is a pivotal moment for our industry, and what’s built today could influence what’s possible tomorrow. We’ve had successful adaptations before, but due to the current pace of evolution, future builds need to be able to accommodate retrofits to ensure they remain fit for purpose. It's crucial to strike a balance between meeting demand, adhering to regulations, and designing for adaptability and durability to stay ahead. We might see a rise in smaller colocation data centers offering flexibility, reduced latency, and cost savings. At the same time, medium players could evolve into hyperscalers, with the right vision to build something suitable to exist in the next hype cycle.


Quantum internet inches closer: Qubits sent 22 miles via fiber optic cable

Even as the biggest names in the tech industry race to build fault-tolerant quantum computers, the transition from binary to quantum can only be completed with a reliable internet connection to transmit the data. Unlike binary bits, transported as light signals inside a fiber optic cable that can be read, amplified, and transmitted over long distances, quantum bits (qubits) are fragile, and even attempting to read them changes their state. ... Researchers in the Netherlands, China, and the US separately demonstrated how qubits could be stored in “quantum memory” and transmitted over the fiber optic network. Ronald Hanson and his team at the Delft University of Technology in the Netherlands encoded qubits in the electrons of nitrogen atoms and nuclear states of carbon atoms of the small diamond crystals that housed them. An optical fiber link ran 25 miles from the university to another laboratory in The Hague, establishing a link with similarly embedded nitrogen atoms in diamond crystals.


Cyber resilience: Safeguarding your enterprise in a rapidly changing world

In an era defined by pervasive digital connectivity and ever-evolving threats, cyber resilience has become a crucial pillar of survival and success for modern-day enterprises. It represents an organisation’s capacity not just to withstand and recover from cyberattacks but also to adapt, learn, and thrive in the face of relentless and unpredictable digital challenges. ... Due to the crippling effects a cyberattack can have on a nation, governments and regulatory bodies are also working to develop guidelines and standards that encourage organisations to embrace cyber resilience. For instance, the European Parliament recently passed the European Cyber Resilience Act (CRA), a legal framework describing the cybersecurity requirements for hardware and software products placed on the European market. It aims to ensure manufacturers take security seriously throughout a product’s lifecycle. In other regions, such as India, where cybersecurity adoption is still evolving, the onus falls on industry leaders to work with governmental bodies and other enterprises to encourage the development and adoption of similar obligations.


How to Build Large Scale Cyber-Physical Systems

There are several challenges in building hardware-reliant cyber-physical systems, such as hardware lead times, organisational structure, common language, system decomposition, cross-team communication, alignment, and culture. People engaged in the development of large-scale safety-critical systems need line of sight to business objectives, Yeman said. Each team should be able to connect their daily work to those objectives. Yeman suggested communicating the objectives through the intent and goals of the system as opposed to specific tasks. An example of an intent-based system objective would be to ensure the system can communicate with military platforms securely, as opposed to specifically mandating that the system must communicate via Link 16, she added. Yeman advised breaking the system problem down into smaller solvable problems. For each of those problems, resolve what is known first, then resolve the unknowns through a series of experiments, she said. This approach allows you to iteratively and incrementally build a continuously validated solution.



Quote for the day:

"Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - May 17, 2024

Cloud Computing at the Edge: From Evolution to Disruption

While hybrid cloud solutions provide a cloud experience from an operational point of view, they do not support the flexible consumption-based pricing model of the cloud. Organizations must purchase or lease IT resources for on-premises deployment up front from the cloud provider rather than on demand. And while they can scale up, they can’t scale down and reduce cost if their usage is reduced. Moreover, the fact that the local extensions can only communicate with the centralized cloud and can’t communicate among themselves is a major limitation on the scalability of this model. ... Scalable multicloud architectures offer a robust solution for delivering IT services at the edge of the network. They provide a comprehensive cloud experience at multiple locations. Proximity to users enhances performance, particularly for localized services and applications, by reducing latency and improving responsiveness. Interconnected clouds facilitate seamless data exchange and collaboration, supporting innovation and agility within organizations. This approach enables data sovereignty and mitigates the risk of downtime and data loss by providing redundancy and resilience across multiple clouds.


Colorado Enacts BIPA-Like Regulatory Obligations (and More)

HB 1130 applies to “biometric identifiers” and “biometric data.” A biometric identifier is defined as “data generated by the technological processing, measurement, or analysis of a consumer’s biological, physical, or behavioral characteristics, which can be processed for the purpose of uniquely identifying an individual.” Biometric data is defined as “one or more biometric identifiers that are used or intended to be used, singly or in combination with each other or with any other personal data, for identification purposes.” Together, the scope of covered data under HB 1130 is much broader than that of BIPA, Texas’s Capture or Use of Biometric Identifiers Act (CUBI), and similar biometrics laws currently in effect. This aspect of HB 1130 not only increases the extent of legal risk and liability exposure that companies will face but will also create significant complexities and challenges in ascertaining whether organizational biometric data processing activities fall under its scope. Importantly, the combination of HB 1130’s broad applicability and its expansive definitions of biometric identifiers/data will subject controllers to compliance obligations even where only a minimal amount of biometric data is processed and no actual biometric identification or authentication is performed.


Adaptive Data Governance: What, Why, How

Adaptive Data Governance has a framework that balances responses to changing business conditions with the requirements for privacy and control. Keys to this structure lie in data culture and alignment, as described in the “Key Components for an Adaptive DG Framework.” To start, define agile governance principles that work best with the business culture. Getting this right can prove challenging, because businesspeople may fear losing control of data accessibility or having a diminished data role, or they may find data decision-making challenging. It helps to start with a data maturity model, to understand how well staff value data, find the gaps, and determine the next steps. From there, establish accountability through clear roles and responsibilities. The decision-making processes and resources need to be well-defined, especially what to do around time-sensitive and critical issues, in what general contexts, and how and when to escalate them. It also helps to include a combination of multiple governance styles that can be applied as needed to the situation at hand and can respond to change. DATAVERSITY’s DG definition describes the different governance types.


Distributed Systems: Common Pitfalls and Complexity

Concurrency represents one of the most intricate challenges in distributed systems. Concurrency means multiple computations happening at the same time. So what occurs when disparate operations attempt to update the same account balance simultaneously? In the absence of a defensive mechanism, race conditions are highly likely to ensue, inevitably resulting in lost writes and data inconsistency. In this example, two operations attempt to update the account balance concurrently. Since they run in parallel, the last one to complete wins, which is a significant issue (a minimal sketch of this lost-update problem follows below). ... The CAP Theorem posits that any distributed data store can satisfy only two of its three guarantees: consistency, availability, and partition tolerance. Since network unreliability is not a factor that can be significantly influenced, in the case of a network partition the only viable option is to choose between availability and consistency. Consider the scenario in which two clients read from different nodes: one from the primary node and another from a follower. Replication is configured to update followers after the leader has been updated. But what happens if, for some reason, the leader stops responding?
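Here is the promised sketch of the lost-update problem, in Python, using two threads against a shared balance. The account values are invented for illustration; a real distributed store faces the same race across nodes rather than threads.

```python
import threading

balance = 0
lock = threading.Lock()

def deposit_unsafe(amount, times):
    global balance
    for _ in range(times):
        # Read-modify-write is not atomic: two threads can read the same
        # value, and the slower writer silently overwrites the faster one.
        current = balance
        balance = current + amount

def deposit_safe(amount, times):
    global balance
    for _ in range(times):
        with lock:  # serialize the read-modify-write sequence
            balance += amount

def run(worker):
    global balance
    balance = 0
    threads = [threading.Thread(target=worker, args=(1, 100_000)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance

print(run(deposit_unsafe))  # frequently < 200000: writes were lost
print(run(deposit_safe))    # always 200000
```

In a distributed store there is no single in-process lock to grab; the equivalent serialization is typically achieved with techniques such as optimistic concurrency control or compare-and-set operations.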


Navigating Three Decades of the Cloud

Today’s organizations have recognized the importance of a strategic, scalable, and incremental approach to their cloud migration efforts. While a 'big-bang' approach may seem attractive, successful organizations are opting for a more phased and purpose-driven approach to enterprise-scale cloud migrations. Moving to the cloud isn't as simple as flipping a switch. Well-thought-out strategic planning, coupled with a clear execution roadmap, is critical to success. Now that technology underpins nearly every aspect of the modern enterprise, it's critical to understand the impacts and implications of modernization across operations, management, finance, IT, and beyond. ... Although the cloud offers unparalleled flexibility and scalability, the specter of rising costs prompts many enterprises to reassess their cloud strategies. As the financial implications of cloud usage become more apparent, organizations find themselves at a crossroads, carefully weighing the benefits against the expenses and reevaluating which workloads to retain on-premises or migrate to private cloud environments.


Are Banks Suffering From ‘Innovation Fatigue’ at the Worst Possible Moment?

The report underscores the importance of aligning performance measurement with strategic objectives. While the metrics provided offer valuable insights into industry benchmarks, relying solely on the data without the context of a well-defined strategy can lead to misguided decisions. To strike the right balance, the report recommends that financial institutions develop a comprehensive digital banking metrics framework. This framework should encompass a range of metrics, including investments, adoption, usage, efficiency, and output, ensuring a holistic understanding of digital banking performance and enabling data-driven decision-making. In conclusion, the 2024 Digital Banking Performance Metrics report serves as a wake-up call for the industry. While financial institutions have made significant investments in digital banking capabilities, the strategic impact of these investments remains uncertain. To navigate the evolving digital landscape successfully, institutions must embrace emerging technologies like AI, reignite their innovation drive, and establish robust performance measurement frameworks aligned with their strategic objectives.


How Technical Debt Can Impact Innovation and How to Fix It

Rafalin said enterprises are facing what he refers to as boiling frog syndrome when it comes to technical debt. "Everyone knows it's an issue, and the clock is ticking, but organizations continue to prioritize releasing new features over maintaining a solid architecture," he said. "With the rise of AI, developers are becoming more and more productive, but this also means they will generate more technical debt. It's inevitable." In Rafalin's view, addressing technical debt requires a strategic vision. While quick patches may save companies in the short term, eventually technical debt will manifest in more outages and vulnerabilities. Technical debt needs to be addressed constantly and proactively as part of the software development life cycle, he said. For organizations just trying to get a quick handle on technical debt, where do they start and what should they do? According to Rafalin, the reality is technical debt that's been accumulating for a long time has no quick fix, especially architectural technical debt. There is no single line of code fix or framework upgrade that solves these architectural issues.


The automation paradox: Identifying when your system is holding you back

A company implementing an automation solution with the promise of significant cost savings sees minimal improvement in its bottom line after months of use. This points to an automation solution that fails to deliver a significant return on investment (ROI). Basic automation solutions often fall short of their promises because they focus on isolated tasks without considering the bigger picture. Advanced automation solutions with features like intelligent process mining and cognitive process automation (CPA) go beyond basic data extraction and task automation. These features unlock significant ROI potential by identifying inefficiencies in existing workflows and automating the tasks that deliver the greatest impact. Beyond just saving Full-Time Equivalent (FTE) costs, cognitive automation provides additional benefits to organizations. ... Effective automation is not a one-time fix; it’s a continuous journey. By recognizing the signs of a plateauing automation strategy and seeking out next-generation solutions, enterprises can break free from the automation paradox. The future belongs to a collaborative approach where humans and intelligent automation work in tandem.


Should You Buy Cyber Insurance in 2024? Pros & Cons

One of the primary challenges of cyber insurance is the rapidly changing nature of cyber threats. As hackers become more sophisticated and new attack vectors emerge, it becomes challenging for insurers to assess and quantify the potential risks accurately. This can lead to coverage gaps and inadequate protection for businesses, as policies may not adequately address emerging cyber threats. Another limitation of cyber insurance is the lack of standardization across policies and coverage options. Each insurer may offer different terms, conditions, and exclusions, making it difficult for businesses to compare policies and make informed decisions. ... Cyber insurance policies typically focus on financial losses resulting from cyber incidents, such as business interruption, data restoration costs, and legal expenses. However, non-monetary losses like reputational damage, loss of customer trust, and diminished brand value may not always be adequately covered. These intangible losses can have far-reaching consequences for businesses, and their limited coverage can expose them to significant risks.


The UK’s digital identity crisis

The impact of Aadhaar in India cannot be overstated; as part of a broader digital infrastructure, it arguably makes India a global leader in digital identity. ... In stark contrast, the UK amassed a paltry 8.6 million users for its GOV.UK Verify scheme before it was shut down in 2023 due to a variety of issues. Its replacement, GOV.UK One Login, has yet to be integrated across all government services, which will be key to adoption. It is fair to say that the UK currently has one of the lowest rates of digital identity adoption globally. For the country, this matters for a number of reasons:

- Missed economic opportunities: Digital identities can streamline business operations, reduce fraud, and enhance customer experiences, driving economic growth. Slow adoption means the UK may lag behind in this area.
- Inefficiencies in public services: Effective digital identity systems can significantly reduce bureaucratic inefficiencies, saving time and resources for both citizens and the government. The UK’s slower adoption hampers these potential efficiencies.
- Lag in innovation: Countries leading in digital identity are often at the forefront of broader digital innovation.



Quote for the day:

"You've got to get up every morning with determination if you're going to go to bed with satisfaction. " -- George Lorimer

Daily Tech Digest - May 16, 2024

Cultivating cognitive liberty in the age of generative AI

Cognitive liberty is a pivotal component of human flourishing that has been overlooked by traditional theories of liberty—primarily because we have taken for granted that our brains and mental experiences are under our own control. This assumption is being replaced with more nuanced understandings of the human brain and its interaction with our environment, our interactions with others, and our interdependence with technology. Cultivating cognitive liberty in the digital age will become increasingly vital to enable humans to exercise individual agency, nurture human creativity, discern fact and fiction, and reclaim our critical thinking skills amid unprecedented cognitive opportunities and risks. Generative AI tools like GPT-4 pose new challenges to cognitive liberty, including the potential to interfere with and manipulate our mental experiences. They can exacerbate biases and distortions that undermine the integrity and reliability of the information we consume, in turn influencing our beliefs, judgments, and decisions. 


Smart homes, smart choices: How innovation is redefining home furnishing

Most notably, the advent of innovations has made shopping for furniture online a far more enjoyable experience. It begins with options. Today, online furniture websites provide customers with a vastly larger catalog of choices than a brick-and-mortar store could imagine, since there are no physical constraints in the digital realm. But vast selections alone are just the beginning. That’s why innovations like AR and VR are so important. Once shoppers identify potential items, AR and VR allow them to view each piece online. They can examine not just static images but pictures from all sides and angles. They can personalize it to fit their style and home. ... First, they understand various key factors, including the origin of the materials being used, how they were made, the labor practices involved, potential environmental impacts, and more. For Wayfair, we are leading the way by including sustainability certifications on approved items as part of our Shop Sustainably commitment. This shift is part of a larger movement called conscious consumerism, where purchasing decisions favor products that have positive social, economic, and environmental impacts.


A Guide to Model Composition

At its core, model composition is a strategy in machine learning that combines multiple models to solve a complex problem that cannot be easily addressed by a single model. This approach leverages the strengths of each individual model, providing more nuanced analyses and improved accuracy. Model composition can be seen as assembling a team of experts, where each member brings specialized knowledge and skills to the table, working together to achieve a common goal. Many real-world problems are too complicated for a one-size-fits-all model. By orchestrating multiple models, each trained to handle specific aspects of a problem or data type, we can create a more comprehensive and effective solution. There are several ways to implement model composition, including but not limited to: Sequential processing: Models are arranged in a pipeline, where the output of one model serves as the input for the next. ... Parallel processing: Multiple models run in parallel, each processing the same input independently. Their outputs are then combined, either by averaging, voting or through a more complex aggregation model, to produce a final result. 
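A minimal Python sketch of the two composition styles described above. The toy "models" are simple stand-in functions invented for illustration, not trained models; a real system would load actual model objects in their place.

```python
from statistics import mode

# Toy stand-ins for trained models: each is just a callable mapping an
# input to a prediction. A real system would load actual models here.
def keyword_model(text):
    return "positive" if "good" in text.lower() else "negative"

def length_model(text):
    return "positive" if len(text) > 20 else "negative"

def exclaim_model(text):
    return "positive" if "!" in text else "negative"

def sequential_pipeline(text, stages):
    """Sequential composition: each stage's output feeds the next stage."""
    result = text
    for stage in stages:
        result = stage(result)
    return result

def parallel_vote(text, models):
    """Parallel composition: every model sees the same input; majority wins."""
    return mode(model(text) for model in models)

print(parallel_vote("A good day!", [keyword_model, length_model, exclaim_model]))
# -> 'positive' (two of the three voters agree)
```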


Securing IoT devices is a challenging yet crucial task for CIOs: Silicon Labs CEO

Likewise, as IoT deployments expand, we’ll need scalable infrastructure and solutions capable of accommodating growing device numbers and data volumes. Many countries have their own nuanced regulatory compliance schemes, which add another layer of complexity, especially for data privacy and security regulations. Notably, in India, cost considerations, including initial deployment costs and ongoing maintenance expenses, can be a barrier to adoption, necessitating an understanding of return on investment. ... Silicon Labs has played a key role in advancing IoT and AI adoption through collaborations with industry and academia, including a recent partnership with IIIT-H in India. In 2022, we launched India's first campus-wide Wi-SUN network at the IIIT-H Smart City Living Lab, enabling remote monitoring and control of campus street lamps. This network provides students and researchers with hands-on experience in developing smart city solutions. Silicon Labs also supports STEM education initiatives like Code2College to inspire innovation in the IoT and AI fields.


Cyber resilience: A business imperative CISOs must get right

Often, organizations have more capabilities than they realize, but these resources can be scattered throughout different departments. And each group responsible for establishing cyber resilience might lack full visibility into the existing capabilities within the organization. “Network and security operations have an incredible wealth of intelligence that others would benefit from,” Daniels says. Many companies are integrating cyber resilience into their enterprise risk management processes. They have started taking proactive measures to identify vulnerabilities, assess risks, and implement appropriate controls. “This includes exposure assessment, regular validation such as penetration testing, and continuous monitoring to detect and respond to threats in real-time,” says Angela Zhao, director analyst at Gartner. ... The rise of generative AI as a tool for hackers further complicates organization’s resilience strategies. That’s because generative AI equips even low-skilled individuals with the means to execute complex cyber attacks. As a result, the frequency and severity of attacks might increase, forcing businesses to up their game. 


Is an open-source AI vulnerability next?

The challenges within the AI supply chain mirror those of the broader software supply chain, with added complexity when integrating large language models (LLMs) or machine learning (ML) models into organizational frameworks. For instance, consider a scenario where a financial institution seeks to leverage AI models for loan risk assessment. This application demands meticulous scrutiny of the AI model’s software supply chain and training data origins to ensure compliance with regulatory standards, such as prohibiting protected categories in loan approval processes. To illustrate, let’s examine how a bank integrates AI models into its loan risk assessment procedures. Regulations mandate strict adherence to loan approval criteria, forbidding the use of race, sex, national origin, and other demographics as determining factors. Thus, the bank must consider and assess the AI model’s software and training data supply chain to prevent biases that could lead to legal or regulatory complications. This issue extends beyond individual organizations. The broader AI technology ecosystem faces concerning trends. 


Google’s call-scanning AI could dial up censorship by default, privacy experts warn

Google’s demo of the call scam-detection feature, which the tech giant said would be built into a future version of its Android OS — estimated to run on some three-quarters of the world’s smartphones — is powered by Gemini Nano, the smallest of its current generation of AI models meant to run entirely on-device. This is essentially client-side scanning: A nascent technology that’s generated huge controversy in recent years in relation to efforts to detect child sexual abuse material (CSAM) or even grooming activity on messaging platforms. ... Cryptography expert Matthew Green, a professor at Johns Hopkins, also took to X to raise the alarm. “In the future, AI models will run inference on your texts and voice calls to detect and report illicit behavior,” he warned. “To get your data to pass through service providers, you’ll need to attach a zero-knowledge proof that scanning was conducted. This will block open clients.” Green suggested this dystopian future of censorship by default is only a few years out from being technically possible. “We’re a little ways from this tech being quite efficient enough to realize, but only a few years. A decade at most,” he suggested.


Data strategy? What data strategy?

A recent survey of UKI SAP users found that only 12 percent of respondents had a data strategy that covers their entire organization - these are people who are very likely to be embarking on tricky migrations to S/4HANA. Without properly understanding and governing the data they’re migrating, they’re en route to some serious difficulties. That’s because, more often than not, when a digital transformation project is on the cards, data takes a back seat. In the flurry of deadlines, testing, and troubleshooting, it feels so much more important to get the infrastructure in place and deal with the data later. The single goal is switching on the new system. Fixing the data flaws that caused so many headaches with the old solution is rarely top of the list. But those flaws and headaches are telling you something: your data needs serious attention. Unless you take action, those data silos that slow down decision-making and the data management challenges that are a blocker to innovation will follow you to your new infrastructure.


Designing and developing APIs with TypeSpec

TypeSpec is in wide use inside Microsoft, having spread from its original home in the Azure SDK team to the Microsoft Graph team, among others. Having two of Microsoft’s largest and most important API teams using TypeSpec is a good sign for the rest of us, as it both shows confidence in the toolkit and ensures that the underlying open-source project has an active development community. Certainly, the open-source project, hosted on GitHub, is very active. It recently released TypeSpec 0.56 and has received over 2000 commits. Most of the code is written in TypeScript and compiled to JavaScript so it runs on most development systems. TypeSpec is cross-platform and will run anywhere Node.js runs. Installation is done via npm. While you can use any programmer’s editor to write TypeSpec code, the team recommends using Visual Studio Code, as a TypeSpec extension for VS Code provides a language server and support for common environment variables. This behaves like most VS Code language extensions, giving you diagnostic tools, syntax highlights, and IntelliSense code completion. 


What’s holding CTOs back?

“Obviously, technology strategy and business strategy have to be ultimately driven by the vision of the organization,’’ Jones says, “but it was surprising that over a third of CTOs we surveyed felt they weren’t getting clear vision and guidance.” The CTO role also means different things in different organizations. “The CTO role is so diverse and spans everything from a CTO who works for the CIO and is making the organization more efficient, all the way to creating visibility for the future and transformations,’’ Jones says. ... Plexus Worldwide’s McIntosh says internal politics and some level of bureaucracy are unavoidable for CTOs seeking to push forward technology initiatives. “Navigating and managing this within an organization requires a balance of experience and influence to lessen any potential negative impact,’’ he says. Experienced leaders who have been with a company a long time “are often skilled at understanding the intricate web of relationships, power dynamics, and competing interests that shape internal politics and bureaucratic hurdles,’’ McIntosh says. 



Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - May 15, 2024

Why Capability-Based IT Investments Planning Doesn’t Work for Enterprises Today

Capability-based Planning has been around for a long time in the world of Enterprise Architecture (EA), and often finds mention in leading EA frameworks. At its core is the concept of “business capability” (or simply “capability”), which represents the “what” that the business does. This is different from the “how” of the business, which is represented by constructs such as business processes, value streams, and value chains. ... Capability-based IT planning approaches are typically linear and spread over years. They do not account for the real and dynamic nature of today's enterprises, wherein new themes such as Product Management, the Agile enterprise, and AI-led business disruption require continuous introspection and adaptation to evolving industry practices and customer preferences. ... The product roadmap provides prioritised inputs for the landscape to respond to. The good thing here is that such roadmaps typically have clarity up to a few quarters ahead (up to 1-2 years generally), with the initial quarters being more concrete and stable than the later quarters. When combined with EA-driven landscape impact analysis, the resulting IT initiatives are much better aligned with the dynamics of the business.


Evolving Roles: Developers and AI in Coding

The increasing use of AI in software development is causing a paradigm shift in the jobs of developers. Developers are evolving from being merely code writers to orchestrators of technology, strategists, and leaders of innovation. This calls for adjusting to new roles that prioritize higher-level decision-making, problem characterization, and system design. One such change is that developers need to be skilled at incorporating and tailoring AI tools into their workflows. This entails knowing the possibilities and limitations of these tools, in addition to being able to use them. Developers can devote their time to more complex and valuable work by becoming proficient with these technologies and freeing up time from repetitive jobs. As AI assumes greater responsibility for the technical coding process, soft skills like project management, communication, and creative problem-solving become more crucial. Developers need to be multidisciplinary collaborators, proficient communicators with non-technical team members, and project managers of both people and technology.


Why is embedded insurance so popular right now?

“Consumers get good value with embedded insurance for two main reasons. The first is trust. Customers want to buy insurance products from their trusted brands, not financial services and insurance organisations. Through embedded solutions, customers can stick to shopping with and purchasing from the brands they love and trust. There is also no need to head to a physical outlet to buy insurance – customers get protection at the exact point of sale and the service or product will be covered instantly. There is a lot of value in this ease and simplicity. Embedded solutions do a lot of the hard work and it means safeguarding what you care about is no more complicated than ticking a box on purchase. The second reason is data. Embedded insurance utilises customer data to provide bespoke costs and policies. Thanks to technology such as open banking APIs (which facilitate the data transfer between entities), tech players can assess the preferences of users, their needs and financial behaviour. Embedded insurance platforms can therefore make informed decisions and provide diverse and tailored offerings to consumers based on their risk profiles. 


Understanding the Modern Data Stack

The architecture of a modern data stack is meticulously designed to ensure utmost flexibility and seamless integration, thereby revolutionizing the workflow for businesses. The hallmark of such an advanced system lies in its ability to adapt to the evolving demands of data processing and analysis. This flexibility is not just limited to handling diverse data types but also extends to its capability to integrate with a myriad of tools and platforms. Integration plays a pivotal role in enhancing this ecosystem, acting as the glue that binds all components of the data stack together. It ensures that data flows smoothly from one process to another without bottlenecks, enabling real-time analytics and insights. This interconnectedness allows for a holistic view of operations, making it easier for businesses to make informed decisions quickly. ... Ensuring Data Quality and security while maintaining cross-platform compatibility forms a cornerstone of the modern data stack. This holistic approach integrates various components, from databases and analytics tools to data integration and visualization platforms, ensuring seamless interoperability across different environments. 


Private cloud makes its comeback, thanks to AI

Private cloud providers may be among the key beneficiaries of today’s generative AI gold rush as, once seemingly passé in favor of public cloud, CIOs are giving private clouds — either on-premises or hosted by a partner — a second look. At the center of this shift is increasing acknowledgement that to support AI workloads and to contain costs, enterprises long-term will land on a hybrid mix of public and private cloud. ... Todd Scott, senior vice president for Kyndryl US, acknowledges that AI and cost are among the key factors driving enterprises toward private clouds. “Most enterprises are currently exploring AI on the public cloud, but we expect clients will ultimately bring the app to their data and run AI where the data is, in private environments and at the edge,” he says. “Another factor that’s driving a move back to private cloud is predictability of cost,” Scott says. “Agile enterprises, by definition, make frequent changes to their applications, so they sometimes see big fluctuations in the cost of having their data on public clouds. Private clouds provide more predictability because the infrastructure is dedicated.”


CISOs Reconsider Their Roles in Response to GenAI Integration

The rise of AI and generative AI tools is a double-edged sword. “On one hand, it’s increasing their organizations’ threat exposure because cybercriminals can now use generative AI tools to rapidly scale their attacks,” said Mike Britton, CISO of Abnormal Security. “On the other hand, CISOs also have a valuable opportunity to leverage AI in strengthening their defenses.” GenAI can help enhance security content creation, security testing and analytics, incident response, and forensics. AI and machine learning can play a role in that, Britton pointed out, by ingesting signals from across the email and SaaS environment and deeply understanding normal behavior across this ecosystem. “AI models can then be used to detect anomalous activity and understand when a message or an event may be malicious,” Britton said. “This can help security teams detect more attacks at a faster speed, ensuring that threat actors never successfully reach their targets.” Jose Seara, CEO and founder of DeNexus, pointed out that modern cybersecurity solutions are already AI-enabled and take advantage of AI’s data processing power to make sense of a large volume of cybersecurity signals. 


How Adobe manages AI ethics concerns while fostering creativity

At Adobe, ethical innovation is our commitment to developing AI technologies in a responsible way that respects our customers and communities and aligns with our values. Back in 2019, we established a set of AI Ethics Principles we hold ourselves to when we're innovating, including accountability, responsibility, and transparency. With the development of Firefly, our focus has been on leveraging these principles to help mitigate biases, respond to issues quickly, and incorporate customer feedback. Our ongoing efforts help ensure that we are implementing Firefly responsibly without slowing down innovation. ... Even before Adobe began work on Firefly, our Ethical Innovation team had leveraged our AI Ethics Principles to create a standardized review process for our AI products and features -- from design to development to deployment. For any product development at Adobe, my team first works with the product team to assess potential risks, evaluate mitigations, and demonstrate how our AI Ethics Principles are being applied. It is not done in isolation.


Why Tokens Are Like Gold for Opportunistic Threat Actors

Once a threat actor has a token, they also have whatever rights and authorizations are granted to the user. If they have captured an IdP token, they can access all the corporate applications integrated with the IdP for SSO — without an MFA challenge. If it is an admin-level credential with associated privileges, they can potentially wreak devastation on systems, data, and backups. The longer the token is active, the more they can access, steal, and damage. Further, they can then create new accounts that no longer require the use of the token for ongoing network access. While expiring session tokens more frequently will not stop these sorts of attacks, it will greatly reduce the risk footprint by shortening the window of opportunity for a token to function. Unfortunately, we often see that tokens are not being expired at regular intervals, and some breach reporting also suggests that default token expirations are being deliberately extended. ... Longer token expiries provide user convenience — but at a high security price.
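As a minimal sketch of the mitigation, here is short-lived token issuance using the PyJWT library. The 15-minute TTL and the signing key are illustrative choices, not recommendations from the article.

```python
import datetime as dt
import jwt  # PyJWT: pip install pyjwt

SECRET = "example-signing-key"  # illustration only - use a managed secret store

def issue_token(subject, ttl_minutes=15):
    """Issue a deliberately short-lived session token.

    A short 'exp' claim narrows the window in which a stolen token
    remains useful, which is the mitigation discussed above.
    """
    now = dt.datetime.now(dt.timezone.utc)
    claims = {
        "sub": subject,
        "iat": now,
        "exp": now + dt.timedelta(minutes=ttl_minutes),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token):
    # PyJWT validates the 'exp' claim by default and raises
    # jwt.ExpiredSignatureError once the token has lapsed.
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```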


Low-tech tactics still top the IT security risk chart

Low-tech attack vectors are being adapted by cyber criminals to overcome security defenses because they can often evade detection until it’s too late. ... Hyatt’s team recently identified a rogue USB drive used to install the Raspberry Robin malware, which acts as a launchpad for subsequent attacks and gives bad actors the ability to fulfil the three key elements of a successful attack — establish a presence, maintain access, and enable lateral movement. ... Even commonplace tasks, such as generating a QR code to configure the Microsoft Authenticator app that’s used for two-factor authentication with Office 365, are open to exploitation, because they normalize QR codes as a secure mechanism in the minds of users, Heiland says. “People have been trained not to click on links, but not when it comes to using QR codes for authentication,” Heiland tells CSO. The danger with a QR code is that it can be configured to launch almost any application on a device, download a file, or open a browser and go to a website, all without the user being aware of what it’s going to do.


Cyber Insurers Pledge to Help Reduce Ransom Payments

As ransomware continues to pummel Britain, the government's cybersecurity agency and three major insurance associations have pledged to offer better support and guidance to victims. ... "Ransomware continues to be the biggest day-to-day cybersecurity threat to most U.K. organizations," Oswald said in a keynote speech. "In recent months, law enforcement has dramatically reduced the global threat from ransomware by disrupting LockBit's activities and just last week unmasking and sanctioning one of its Russia-based leaders." Nevertheless, officials continue to urge organizations to hone their defenses and constantly keep improving their resilience capabilities, to better repel hack attacks and avoid ever having to even consider paying a ransom. "The NCSC does not encourage, endorse or condone paying ransoms, and it's a dangerous misconception that doing so will make an incident go away or free victims of any future headaches," Oswald said. "In fact, every ransom that is paid signals to criminals that these attacks bear fruit and are worth doing."



Quote for the day:

''The distance between insanity and genius is measured only by success.'' -- Bruce Feirstein

Daily Tech Digest - May 14, 2024

Transforming 6G experience powered by AI/ML

While speed has been the driving force behind previous generations, 6G redefines the game. Yes, it will be incredibly fast, but raw bandwidth is just one piece of the puzzle. 6G aims for seamless and consistent connectivity everywhere. ... This will bridge the digital divide and empower remote areas to participate fully in the digital age. 6G networks will be intelligent entities, leveraging AI and ML algorithms to become:

- Adaptive: The network will constantly analyze traffic patterns, user demands, and even environmental factors. Based on this real-time data, it will autonomously adjust configurations, optimize resource allocation, and predict user needs for a truly proactive experience. Imagine a network that anticipates your VR gaming session and seamlessly allocates the necessary resources before you even put on the headset.
- Application-Aware: Gone are the days of one-size-fits-all connectivity. 6G will cater to a diverse range of applications, each with distinct requirements. The network will intelligently recognize the type of traffic – a high-resolution video stream, a critical IoT sensor reading, or a real-time AR overlay – and prioritize resources accordingly. This ensures flawless performance for all users, regardless of their activity.
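As a toy illustration of the application-aware idea, here is a minimal Python sketch of class-based prioritization. The traffic classes and priority values are invented for illustration; a real 6G scheduler would classify flows dynamically from live traffic inspection rather than a static table.

```python
import heapq

# Invented priority table: lower number = served sooner. An application-aware
# network would derive these classes from live traffic inspection.
PRIORITY = {"iot_sensor_critical": 0, "ar_overlay": 1, "video_stream": 2, "bulk_sync": 3}

class Scheduler:
    """Dispatches packets so latency-critical traffic is served first."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps arrival order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dispatch(self):
        _, _, packet = heapq.heappop(self._queue)
        return packet

s = Scheduler()
s.enqueue("video_stream", b"frame-1")
s.enqueue("iot_sensor_critical", b"alarm")
print(s.dispatch())  # b'alarm' - the critical sensor reading jumps the queue
```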


How data centers can simultaneously enable AI growth and ESG progress

Unlocking AI’s full potential may require organizations to make significant concessions on their ESG goals unless the industry drastically reduces AI’s environmental footprint. This means all data center operators - including both in-house teams and third-party partners - must adopt innovative data center cooling capabilities that can simultaneously improve energy efficiency and reduce carbon emissions. The need for HPC capabilities is not unique to AI. Grid computing, clustering, and large-scale data processing are among the technologies that depend on HPC to facilitate distributed workloads, coordinate complex tasks, and handle immense amounts of data across multiple systems. However, with the rapid rise of AI, the demand for HPC resources has surged, intensifying the need for advanced infrastructure, energy efficiency, and sustainable solutions to manage the associated power and cooling requirements. In particular, the large graphics processing units (GPUs) required to support complex AI models and deep learning algorithms generate more heat than traditional CPUs, creating new challenges for data center design and operation. 


Cutting the cord: Can Air-Gapping protect your data?

The first challenge is keeping systems up to date. Software requires patching and upgrading as bugs are found and new features are needed. An Air-Gapped system can be updated via USB sticks and CD-ROMs, but this (a) is time-consuming and (b) introduces a partial connection with the outside world. Chris Hauk, Consumer Privacy Advocate at Pixel Privacy, has observed the havoc this can cause. “Yes, hardware and software both can be easily patched just like we did back in the day, before the internet,” says Hauk. “Patches can be ‘sneakernetted’ to machines on a USB stick. Unfortunately, USB sticks can be infected by malware if the stick used to update systems was created on a networked computer. “The Stuxnet worm, which did damage to Iran’s nuclear program and is believed to have been created by the United States and Israel, was malware that targeted Air-Gapped systems, so no system that requires updating is absolutely safe from attacks, even if it is Air-Gapped.” The Air-Gap may suffer breaches. Users may want to take data home or have another reason to access systems. A temporary connection to the outside world, even via a USB stick, poses a serious risk.


Delivering Software Securely: Techniques for Building a Resilient and Secure Code Pipeline

Resilience in a pipeline embodies the system's ability to deal with unexpected events such as network latency, system failures, and resource limitations without causing interruptions. The aim is to design a pipeline that not only withstands stress but also self-heals and maintains service continuity. By doing this, you can ensure that the development and deployment of applications can withstand the inevitable failures of any technical environment. ... To introduce fault tolerance into your pipeline, you have to diversify resources and automate recovery processes (a minimal sketch follows below). ... When it comes to disaster recovery, it is crucial to have a well-organized plan that covers the procedures for data backup, resource provisioning, and restoration operations. This could include automating backups and using CloudFormation scripts to provision the needed infrastructure quickly. ... How can we ensure that these resilience strategies are effective not only in theory but also in practice? Through careful testing and validation. Apply chaos engineering principles by intentionally introducing defects into the system to ensure that the pipeline responds as planned.
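Here is the promised sketch of the fault-tolerance idea applied to a single pipeline step, in Python: retry transient failures with exponential backoff and jitter. The TransientError class and the retry parameters are invented for illustration; CI/CD platforms usually expose equivalent retry settings natively.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for timeouts, 5xx responses, throttling, and similar blips."""

def run_with_retries(step, attempts=4, base_delay=1.0):
    """Run one pipeline step, retrying transient failures with
    exponential backoff plus jitter so the pipeline self-heals."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except TransientError:
            if attempt == attempts:
                raise  # escalate - self-healing has limits
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5)
            time.sleep(delay)
```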


Cinterion IoT Cellular Modules Vulnerable to SMS Compromise

Cinterion cellular modems are used across a number of industrial IoT environments, including in the manufacturing and healthcare as well as financial services and telecommunications sectors. Telit Cinterion couldn't be immediately reached for comment about the status of its patching efforts or mitigation advice. Fixing the flaws would require the manufacturer of any specific device that includes a vulnerable Cinterion module to release a patch. Some devices, such as insulin monitors in hospitals or the programmable logic controllers and supervisory control and data acquisition systems used in industrial environments, might first need to be recertified with regulators before device manufacturers can push patches to users. The vulnerabilities pose a supply chain security risk, said Evgeny Goncharov, head of Kaspersky's ICS CERT. "Since the modems are typically integrated in a matryoshka-style within other solutions, with products from one vendor stacked atop those from another, compiling a list of affected end products is challenging," he said. 


Automotive Radar Testing and Big Data: Safeguarding the Future of Driving

In radar EOL testing, one of the key verification parameters is radar cross-section (RCS) detection accuracy, which represents the size of an object. Unlike passive objects that have a fixed RCS, a radar target simulator (RTS) allows the simulation of various levels of RCS, echoing a desired object size for radar detection. While RTS systems offer versatility for radar testing, they present challenges to overcome. One such challenge is the sensitivity of the system’s millimeter-wave (mmWave) components to temperature variations, which can significantly impact the ability to accurately simulate RCS values. Therefore, controlling the ambient temperature in a testing setup is important to ensure that the RTS replicates the RCS expected for a given object size. Furthermore, the repercussions extend beyond the immediate operational setbacks, with the need to scrap a number of faulty radar module units. Not only does this represent a direct monetary loss and a hit to the overall profit margin, but it also contributes to waste and environmental concerns. All these adverse outcomes, from reduced output capacity to financial losses and environmental impact, highlight the critical importance of integrating analytics software into an automotive radar EOL testing solution.


Nvidia teases quantum accelerated supercomputers

The company revealed that sites in Germany, Japan, and Poland will use the platform to power quantum processing units (QPU) in their high performance computing systems. “Quantum accelerated supercomputing, in which quantum processors are integrated into accelerated supercomputers, represents a tremendous opportunity to solve scientific challenges that may otherwise be out of reach,” said Tim Costa, director, Quantum and HPC at Nvidia. “But there are a number of challenges between us, today, and useful quantum accelerated supercomputing. Today’s qubits are noisy and error prone. Integration with HPC systems remains unaddressed. Error correction algorithms and infrastructure need to be developed. And algorithms with exponential speed up actually need to be invented, among many other challenges.” ... “But another open frontier in quantum remains,” Costa said. “And that’s the deployment of quantum accelerated supercomputers – accelerated supercomputers that integrate a quantum processor to perform certain tasks that are best suited to quantum in collaboration with and supported by AI supercomputing. We’re really excited to announce today the world’s first quantum accelerated supercomputers.”


Tailoring responsible AI: Defining ethical guidelines for industry-specific use

As AI becomes increasingly embedded in business operations, organizations must ask themselves how to prepare for and prevent AI-related failures, such as AI-powered data breaches. AI tools are enabling hackers to develop highly effective social engineering attacks. Right now, having a strong foundation in place to protect customer data is a good place to start. Ensuring third-party AI model providers don’t use your customers’ data also adds protection and control. There are also opportunities for AI to help strengthen crisis management. The first relates to security crises, such as outages and failures, where AI can identify the root of an issue faster. AI can quickly sift through a ton of data to find the “needle in the haystack” that points to the source of the attack or the service that failed. It can also surface relevant data for you much faster using conversational prompts. In the future, an analyst might be able to ask an AI chatbot that’s embedded in its security framework questions about suspicious activity, such as, “What can you tell me about where this traffic originated from?” Or, “What kind of host was this on?”


Taking a ‘Machine-First’ Approach to Identity Management

With microservices, machine identities are proliferating at an alarming rate. CyberArk has reported that the ratio of machine identities to humans in organizations is 45 to 1. At the same time, 87% of respondents in its survey said they store secrets in multiple places across DevOps environments. Curity’s Michal Trojanowski previously wrote about the complex mesh of services comprising an API, adding that securing them is not just about authenticating the user. “A service that receives a request should validate the origin of the request. It should verify the external application that originally sent the request and use an allowlist of callers.” ... Using agentless scanning of the identity repositories engineers are using and log analysis, the company first maps all the non-human identities throughout the infrastructure — Kubernetes, databases, applications, workloads, and servers. It creates what it calls attribution — a strong context of which workloads and which humans use each identity, including an understanding of its dependencies. Mapping ownership of the various identities is also key. “Think about organizations that have thousands of developers. Security teams sometimes find issues but don’t know how to solve them because they don’t know who to talk with,” Apelblat said.
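
As a rough illustration of the allowlist check Trojanowski describes, here is a minimal Python sketch; the claim names ("azp", "sub") and the allowlist entries are assumptions, and in practice the claims would come from a token whose signature has already been verified.

ALLOWED_CALLERS = {"checkout-frontend", "payments-batch-job"}  # hypothetical

def authorize_caller(claims: dict) -> None:
    """Reject requests whose originating application is not allowlisted."""
    caller = claims.get("azp") or claims.get("sub")
    if caller not in ALLOWED_CALLERS:
        raise PermissionError(f"caller {caller!r} is not an allowed origin")

# Example: claims as they might look after token signature verification.
authorize_caller({"sub": "checkout-frontend", "scope": "orders:write"})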


The limitations of model fine-tuning and RAG

Several factors limit what LLMs can learn via RAG. The first factor is the token allowance. With the undergrads, I could introduce only so much new information into a timed exam without overwhelming them. Similarly, LLMs tend to have a limit, generally between 4k and 32k tokens per prompt, which limits how much an LLM can learn on the fly. The cost of invoking an LLM is also based on the number of tokens, so being economical with the token budget is important to control the cost. The second limiting factor is the order in which RAG examples are presented to the LLM. The earlier a concept is introduced in the example, the more attention the LLM pays to it in general. While a system could reorder retrieval augmentation prompts automatically, token limits would still apply, potentially forcing the system to cut or downplay important facts. To address that risk, we could prompt the LLM with information ordered in three or four different ways to see if the response is consistent. ... The third challenge is to execute retrieval augmentation such that it doesn’t diminish the user experience. If an application is latency sensitive, RAG tends to make latency worse. 
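
A minimal sketch of working within that token allowance, assuming retrieved passages arrive pre-ranked by relevance and using tiktoken's cl100k_base encoding as one plausible tokenizer; the budget and passages here are illustrative.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def pack_context(passages: list[str], budget: int) -> list[str]:
    """Add passages in rank order until the token budget is exhausted."""
    kept, used = [], 0
    for text in passages:  # assumed pre-sorted, most relevant first
        cost = len(enc.encode(text))
        if used + cost > budget:
            break  # later (lower-ranked) passages are cut, as noted above
        kept.append(text)
        used += cost
    return kept

context = pack_context(["top passage ...", "next passage ..."], budget=3000)

Note how this interacts with the ordering problem: whatever ranking feeds this function also decides which facts survive the cutoff and which appear first in the prompt.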



Quote for the day:

"What you do makes a difference, and you have to decide what kind of difference you want to make." -- Jane Goodall

Daily Tech Digest - May 13, 2024

Why AI Won’t Take Over The World Anytime Soon

The majority of AI systems we encounter daily are examples of "narrow AI." These systems are masters of specialization, adept at tasks such as recommending your next movie on Netflix, optimizing your route to avoid traffic jams or even more complex feats like writing essays or generating images. Despite these capabilities, they operate under strict limitations, designed to excel in a particular arena but incapable of stepping beyond those boundaries. Even the generative AI tools dazzling us with their ability to create content across multiple modalities are no exception. They can draft essays, recognize elements in photographs, and even compose music. However, at their core, these advanced AIs are still just making mathematical predictions based on vast datasets; they do not truly "understand" the content they generate or the world around them. Narrow AI operates within a predefined framework of variables and outcomes. It cannot think for itself, learn beyond what it has been programmed to do, or develop any form of intention. Thus, despite the seeming intelligence of these systems, their capabilities remain tightly confined.


Establishing a security baseline for open source projects

Transparency is in the spirit of open source, and enhancing it within the community is a key goal of our organization. Currently, every OpenSSF project is required to have a security policy that provides clear directions on how vulnerabilities should be reported and how they will be responded to. The security baseline requires this as well, and the OpenSSF Best Practices Badge program and Scorecard both report whether a project has a vulnerability disclosure policy. The badge program passing level has been used by other Linux Foundation open-source projects as a criterion to become generally available. Open-source communities have been pushing the boundaries on SBOM to increase transparency in both open-source and closed-source software. However, there have been challenges with SBOM consumption due to data quality and interoperability issues. Recently, OpenSSF, along with CISA and DHS S&T, took steps to address this challenge by releasing Protobom, an open-source software supply chain tool that enables all organizations, including system administrators and software development communities, to read and generate SBOMs and file data, as well as translate this data across standard industry SBOM formats.
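
As a small illustration of SBOM consumption (independent of Protobom, which is a Go library), here is a Python sketch that lists the components recorded in a CycloneDX JSON document; the file name is hypothetical, while the field names follow the CycloneDX JSON schema.

import json

# Load a CycloneDX SBOM (hypothetical file name).
with open("sbom.cdx.json") as f:
    sbom = json.load(f)

# Each entry in "components" describes one dependency in the supply chain.
for component in sbom.get("components", []):
    print(component.get("name"), component.get("version"))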


Overcoming Resistance to DevOps Adoption

A main challenge in transitioning from traditional software development approaches is establishing a DevOps culture. For years, development teams have worked in silos, leading to bureaucracy and departmental barriers that hindered agility and collaboration. These teams are required to learn new tools and processes as part of adopting agile development methodologies, creating a cultural shift and resistance to change. Most practitioners have cited cultural change as a barrier to DevOps adoption. Soumik Mukherjee, senior manager, platform engineering (global), Ascendion, said he confronted these challenges by starting small with manageable projects, celebrating early wins to build momentum, and fostering open communication and collaboration across teams. "We invest in upskilling our employees and continuously track progress to identify and address any bottlenecks. By breaking down silos and building a shared understanding, we create a collaborative environment where teams work together efficiently and effectively," Mukherjee said. Debashis Singh, CIO at Persistent Systems, said, "Fostering a DevOps and DevSecOps culture and establishing a clear vision is akin to setting the North Star for everyone in the organization."


Charting India’s AI trajectory: Insights from World Economic Forum

The overarching theme of this year’s WEF was “Rebuilding Trust”, though the topic extends beyond fighting corruption in public institutions. Trust is critical to AI. Without trust in AI and its outputs, our goal of transforming economies with AI will be hard to achieve. The foundation of this trust starts with high-fidelity, trusted, and secure input data. We must center security and compliance when developing AI applications to combat these concerns. Adoption will naturally accelerate when leaders can trust that AI applications are secure and compliant. No company can risk missing out on the productivity gains that AI offers. As Sam Altman said at Davos, “[GenAI] will change the world much less than we all think and it will change jobs much less than we all think. We will all operate at a… higher level of abstraction… [and] have access to a lot more capability.” India’s AI journey and progress took the spotlight at WEF and were center stage at various bustling technology discussions. The country’s unwavering commitment to driving innovation and fostering growth is evident in the many success stories and examples shared at the forum.


Don’t overlook the impact of AI on data management

While many organizations already understand the power of having clean data and clean ways of inputting that data, many fail to grasp that tools ready to help them with this process already exist and are already doing wonders for peers in their industry. One emerging tool for inputting data that may surprise is the generative AI chatbot. With the advent of gen AI, a new breed of chatbots has emerged — ones that can conduct high-level conversations, resembling human interactions more closely than ever before. Not only can they understand customer queries, but they can also collect data and input it directly into business systems, efficiently handling forms and personalizing client profiles. Integrating such AI-driven chatbots isn’t just about cutting costs — it’s about revolutionizing customer engagement and driving new insights from every interaction. If the first step is automating data capture, chatbots can directly collect and process data from customers without human intervention. Chatbots can not only collect the data but also use it for cross-selling.
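
A minimal Python sketch of that kind of automated data capture, with hypothetical fields, validators, and a stubbed conversation standing in for a real chatbot and CRM integration:

import re

# Hypothetical required fields and their validation rules.
REQUIRED_FIELDS = {
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v),
    "company": lambda v: len(v.strip()) > 1,
}

def capture_profile(ask) -> dict:
    """Prompt for each field until it validates, then return the record."""
    record = {}
    for field, is_valid in REQUIRED_FIELDS.items():
        answer = ask(f"Could you share your {field}?")
        while not is_valid(answer):
            answer = ask(f"That {field} doesn't look right - mind retrying?")
        record[field] = answer
    return record

# In a real deployment `ask` would be a chatbot turn; here, a console stub.
profile = capture_profile(lambda question: input(question + " "))
print(profile)  # record ready to write into a business system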


Linux backdoor threat is a wake-up call for IoT

This hack should serve as a wake-up call that not every device warrants Linux. Basic devices like sensors or monitors – and, yes, even doorbells – usually serve one function at a time. They can therefore benefit from the resource efficiency and focused functionality of RTOS. In Linux and other general-purpose operating systems, programs are loaded dynamically after boot, often with the ability to run in separate memory and file spaces under different user accounts. This isolation is beneficial when running multiple applications concurrently on a shared server, as one user’s programs cannot interfere with another’s, and hardware access is shared equally through the operating system. In contrast, RTOS operates by compiling applications and tasks directly into the system with minimal separation between memory spaces and hardware. Since the primary goal of an IoT device is typically to serve a single application, possibly divided into multiple tasks, this lack of separation is not an issue. Additionally, because the application is compiled into the RTOS, it is ready to run after a very short boot and initialization process.


AI At The Edge Is Different From AI In The Data Center

In manufacturing, locally run AI models can rapidly interpret data from sensors and cameras to perform vital tasks. For example, automakers scan their assembly lines using computer vision to identify potential defects in a vehicle before it leaves the plant. In a use case like this, very low latency and always-on requirements make data movement throughout an extensive network impractical. Even small amounts of lag can impede quality assurance processes. On the other hand, low-power devices are ill-equipped to handle beefy AI workloads, such as training the models that computer vision systems rely on. Therefore, a holistic, edge-to-cloud approach combines the best of both worlds. Backend cloud instances provide the scalability and processing power for complex AI workloads, and front-end devices put data and analysis physically close together to minimize latency. For these reasons, cloud solutions, including those from Amazon, Google, and Microsoft, play a vital role. Flexible and performant instances with purpose-built CPUs, like the Intel Xeon processor family with built-in AI acceleration features, can tackle the heavy lifting for tasks like model creation.


Ask a Data Ethicist: What Happens When Language Becomes Data?

Natural language processing involves turning language into formats a machine can understand (numbers), before turning it back into our desired human output (text, code, etc.). One of the first steps in the process of “datafying” language is to break it down into tokens. Tokens are typically a single word, at least in English – more on that in a minute. ... Tokens are important because they not only drive the performance of the model but also drive training costs. AI companies charge developers by the token. English tends to be the most token-efficient language, making it economically advantageous to train on English language “data” versus, say, Burmese. This blog post by data scientist Yennie Jun goes into further detail about how the process works in a very accessible way, and this tool she built allows you to select different languages along with different tokenizers to see exactly how many tokens are needed for each of the languages selected. NLP training techniques used in LLMs privilege the English language when they turn it into data for training, and penalize other languages, particularly low-resource languages.
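
A quick way to see the token-efficiency gap yourself, using tiktoken's cl100k_base encoding as one example tokenizer and Hindi as the comparison language; exact counts will differ by tokenizer, which is the point of the tool mentioned above.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for label, text in [
    ("English", "The weather is nice today."),
    ("Hindi", "आज मौसम अच्छा है।"),  # same sentence, translated
]:
    # Non-English scripts typically split into many more tokens.
    print(label, len(enc.encode(text)))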


AI-powered XDR: The Answer to Network Outages and Security Threats

To overcome the limitations of standard XDR, organizations can choose XDR capabilities integrated within a SASE architecture. SASE consolidates all networking and security functions into a cohesive whole with single-pane-of-glass visibility. SASE-based, next-gen XDR can leverage SASE’s telemetry to inform an organization’s incident detection and response workflows. By leveraging native sensors, like NGFW, advanced threat prevention, SWG, and ZTNA (zero trust network access), that feed data into a unified data lake, SASE eliminates the need for data integration and normalization. It allows XDR to analyze raw data, which eliminates inaccuracies and gaps. ... AI and machine learning play a pivotal role in XDR capabilities. Advanced algorithms trained on vast amounts of data enable more accurate incident detection and correlation. However, only comprehensive, consistent, and high-quality data and events can train AI/ML algorithms to create quality XDR incidents and perform root-cause analysis. SASE converges petabytes of data from various native sensors into a single data lake for training advanced AI/ML models.


Is an AI Bubble Inevitable?

Forward-looking enterprise AI adopters are already hedging their bets by ensuring they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution, Zoldi says. He notes that many financial services organizations have already pulled back from using GenAI, both internally and for customer-facing applications. "The fact that ChatGPT, for example, doesn't give the same answer twice is a big roadblock for banks, which operate on the principle of consistency." ... In the event of a market pullback, AI customers may revert to less sophisticated approaches instead of reevaluating their AI strategies, Amorim warns. "This could result in a setback for businesses that have invested heavily in AI, since they may be less inclined to explore its full potential or adapt to changing market dynamics." Just as the dot-com crash didn't permanently destroy the web, an AI industry collapse won't mark the end of AI. Zoldi believes there will eventually be a return to normal. "Companies that had a mature, responsible AI practice will come back to investing in continuing that journey," he notes.
 


Quote for the day:

"Without continual growth and progress, such words as improvement, achievement, and success have no meaning." -- Benjamin Franklin