
Daily Tech Digest - March 19, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates


How AI is Becoming More Human-Like With Emotional Intelligence

Humanizing AI means designing systems that can understand, interpret, and respond to human emotions in a way that feels more natural. It means making the AI capable enough to pick up on cues, read the room, and react as a human would, but in a polished way. ... It is only natural that a potential user will prefer to interact with something that acknowledges their queries and engages with them like a human. AI that sounds and responds like a human helps build trust and rapport with users. ... AI should also adapt based on mood and tone; you cannot keep sending automated messages to your users, especially to the ones who are irate. ... The humanization of AI also makes AI accessible and inclusive to all. Voice assistants, screen readers, AI-powered speech-to-text, and text-to-speech tools are good examples. ... As AI becomes more aware and powerful, there are rising concerns about its ethical usage. There have to be checks in place that ensure AI doesn’t blatantly mimic human emotions to exploit users’ feelings, and users should be clearly warned when they are dealing with machine-generated content. Businesses must ensure ethical AI development, prioritizing user trust and transparency; systems should be programmed to respect user privacy and not manipulate users into making purchases or conversions.


Beyond Trends: A Practical Guide to Choosing the Right Message Broker

In distributed systems, messaging patterns define how services communicate and process information. Each pattern comes with unique requirements, such as ordering, scalability, error handling, or parallelism, which guide the selection of an appropriate message broker. ... The Event-Carried State Transfer (ECST) pattern is a design approach used in distributed systems to enable data replication and decentralized processing. In this pattern, events act as the primary mechanism for transferring state changes between services or systems. Each event includes all the necessary information (state) required for other components to update their local state without relying on synchronous calls to the originating service. By decoupling services and reducing the need for real-time communication, ECST enhances system resilience, allowing components to operate independently even when parts of the system are temporarily unavailable. ... The Event Notification Pattern enables services to notify other services of significant events occurring within a system. Notifications are lightweight and typically include just enough information (e.g., an identifier) to describe the event. To process a notification, consumers often need to fetch additional details from the source (and/or other services) by making API calls. 
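To make the contrast concrete, here is a minimal Python sketch of the two message shapes for a hypothetical order service; the class and field names are illustrative assumptions, not drawn from the article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class OrderStateChangedEvent:
    """ECST-style event: carries the full state other services need,
    so consumers can update their local copies without calling back."""
    order_id: str
    customer_id: str
    status: str
    line_items: list
    total_amount: float
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class OrderStatusNotification:
    """Event Notification-style message: just enough to describe the event;
    consumers fetch details from the source service via an API call."""
    order_id: str
    event_type: str  # e.g. "order.status_changed"
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A consumer of the ECST event can update its local read model directly,
# while a consumer of the notification would call e.g. GET /orders/{order_id}.
```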


Successful AI adoption comes down to one thing: Smarter, right-size compute

A common perception in the enterprise is that AI solutions require a massive investment right out of the gate, across the board, on hardware, software and services. That has proven to be one of the most common barriers to adoption — and an easy one to overcome, Balasubramanian says. The AI journey kicks off with a look at existing tech and upgrades to the data center; from there, an organization can start scaling for the future by choosing technology that can be right-sized for today’s problems and tomorrow’s goals. “Rather than spending everything on one specific type of product or solution, you can now right-size the fit and solution for the organizations you have,” Balasubramanian says. “AMD is unique in that we have a broad set of solutions to meet bespoke requirements. We have solutions from cloud to data center, edge solutions, client and network solutions and more. ... While both hardware and software are crucial for tackling today’s AI challenges, open-source software will drive true innovation. “We believe there’s no one company in this world that has the answers for every problem,” Balasubramanian says. “The best way to solve the world’s problems with AI is to have a united front, and to have a united front means having an open software stack that everyone can collaborate on. ...”


CDOs: Your AI is smart, but your ESG is dumb. Here’s how to fix it

Embedding sustainability into a data strategy requires a deliberate shift in how organizations manage, govern and leverage their data assets. CDOs must ensure that sustainability considerations are integrated into every phase of data decision-making rather than treating ESG as an afterthought or compliance requirement. A well-designed strategy can help organizations balance business growth with environmental, social and governance (ESG) responsibility while improving operational efficiency. ... Advanced analytics and AI can unlock new opportunities for sustainability. Predictive modeling can help companies optimize energy consumption, while AI-driven insights can identify supply chain inefficiencies that lead to excessive waste. For example, retailers are leveraging AI-powered demand forecasting to reduce overproduction and excess inventory, significantly cutting down carbon emissions and waste.  ... Creating a sustainability-focused data culture requires education and engagement across all levels of the organization. CDOs can implement ESG-focused data literacy programs to ensure that business leaders, data scientists and engineers understand the impact of their work on sustainability. Encouraging collaboration between data teams and sustainability departments ensures ESG considerations remain a priority throughout the data lifecycle.


Five Critical Shifts for Cloud Native at a Crossroads

General-purpose operating systems can become a Kubernetes bottleneck at scale. Traditional OS environments are designed for a wide range of use cases, carry unnecessary overhead and bring security risks when running cloud native workloads. Enterprises are instead increasingly turning to specialized operating systems that are purpose-built for Kubernetes environments, finding that this shift has advantages across security, reliability and operational efficiency. The security implications are particularly compelling. While traditional operating systems leave many potential entry points exposed, specialized cloud native operating systems take a radically different approach. ... Cost-conscious organizations (is there another kind?) are discovering that running Kubernetes workloads solely in public clouds isn’t always the best approach. Momentum has continued to grow toward hybrid and on-premises strategies that offer greater control over both costs and capabilities. This shift isn’t just about cost savings; it’s about building infrastructure precisely tailored to specific workload requirements, whether that’s ultra-low latency for real-time applications or specialized configurations for AI/machine learning workloads.


Moving beyond checkbox security for true resilience

A threat-informed and risk-based approach is paramount in an era of perpetually constrained cybersecurity budgets. Begin by assessing the organization’s crown jewels – sensitive customer data, intellectual property, financial records, or essential infrastructure. These assets represent the core of the organization’s value and demand the highest priority for protection. ... Organizations frequently underestimate the risks posed by unmanaged devices, also known as shadow IT, and by their software supply chain. As reliance deepens on third-party software and libraries embedded within the organization’s in-house apps, the attack surface becomes a constantly shifting landscape with hidden vulnerabilities. Unmanaged devices and unauthorized applications are equally problematic and can introduce unexpected and substantial risks. These often-overlooked elements create critical blind spots, allowing attackers to exploit vulnerabilities that existing security measures might miss. To address them, organizations must implement rigorous vendor risk management programs, track IT assets, and enforce application control policies. ... Regardless of the trends, CISOs should assess the specific threats relative to their organization and ensure that foundational security measures are in place.


How to simplify app migration with generative AI tools

Reviewing existing documentation and interviewing subject matter experts is often the best starting point to prepare for an application migration. Understanding the existing system’s business purposes, workflows, and data requirements is essential when seeking opportunities for improvement. This outside-in review helps teams develop a checklist of which requirements are essential to the migration, where changes are needed, and where unknowns require further discovery. Furthermore, development teams should expect and plan for a change management program to support end users during the migration. ... Technologists will also want to do an inside-out analysis, including performing a code review, diagramming the runtime infrastructure, conducting data discovery, and analyzing log files or other observability artifacts. Even more important may be capturing the dependencies, including dependent APIs, third-party data sources, and data pipelines. This architectural review can be time-consuming and often requires significant technical expertise. Using genAI can simplify and accelerate the process. “GenAI is impacting app migrations in several ways, including helping developers and architects answer questions quickly regarding architectural and deployment options for apps targeted for migration,” says Rob Skillington, CTO & co-founder of Chronosphere.
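As a hypothetical illustration of how genAI might assist the inside-out analysis, the sketch below gathers dependency-related configuration files and asks a model to summarize external dependencies. The OpenAI client, model name, file list, and prompt are assumptions chosen for illustration, not a workflow prescribed in the article.

```python
# Sketch: asking a genAI model to summarize external dependencies found in
# project configuration files, as one input to an inside-out migration review.
from pathlib import Path

from openai import OpenAI  # example provider; any chat-completion API would do


def summarize_dependencies(project_root: str, model: str = "gpt-4o-mini") -> str:
    # Illustrative file list; a real review would scan far more artifacts.
    config_files = ["requirements.txt", "pom.xml", "package.json", "Dockerfile"]
    snippets = []
    for name in config_files:
        path = Path(project_root) / name
        if path.exists():
            snippets.append(f"--- {name} ---\n{path.read_text()[:4000]}")

    prompt = (
        "You are helping plan an application migration. From the configuration "
        "files below, list the external APIs, third-party libraries, and data "
        "stores this application depends on, and flag anything that may need "
        "replacement on the target platform.\n\n" + "\n\n".join(snippets)
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example: print(summarize_dependencies("./legacy-app"))
```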


How to Stop Expired Secrets from Disrupting Your Operations

Unlike human users, the credentials used by non-human identities (NHIs) often don’t receive expiration reminders or password reset prompts. When a credential quietly reaches the end of its validity period, the impact can be immediate and severe: application failures, broken automation workflows, service downtime, and urgent security escalations. And unlike the food in your fridge, there’s no nosy relative to point out that your secrets have gone bad. ... While TLS/SSL certificate expiration often gets the most attention due to its visible impact on websites, many types of machine credentials have built-in expiration. API keys silently time out in backend services, OAuth tokens reach their limits, IAM role sessions terminate, Kubernetes service account tokens expire, and database connection credentials become invalid. ... The primary consequence of an expired credential is a failed authentication attempt. At first glance, this might seem like a simple fix – just replace the credential and restart the service. But in reality, identifying and resolving an expired credential issue is rarely straightforward. Consider a cloud-native application that relies on multiple APIs, internal microservices, and external integrations. If an API key or OAuth token used by a backend service expires, the application might return unexpected errors, time out, or degrade in ways that aren’t immediately obvious.
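A minimal sketch of the kind of proactive expiry monitoring the article argues for, assuming a simple in-house credential inventory; the names and thresholds are illustrative assumptions.

```python
# Sketch: flagging machine credentials that are close to expiry so rotation
# happens before authentication starts failing.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ManagedCredential:
    name: str           # e.g. "payments-api-key" or "reporting-oauth-client"
    owner_service: str  # the workload (non-human identity) that uses it
    expires_at: datetime


def credentials_needing_rotation(
    inventory: list[ManagedCredential], warn_days: int = 14
) -> list[ManagedCredential]:
    """Return credentials that are expired or expire within warn_days."""
    cutoff = datetime.now(timezone.utc) + timedelta(days=warn_days)
    return [c for c in inventory if c.expires_at <= cutoff]


if __name__ == "__main__":
    inventory = [
        ManagedCredential("payments-api-key", "checkout-service",
                          datetime.now(timezone.utc) + timedelta(days=3)),
        ManagedCredential("warehouse-db-password", "reporting-job",
                          datetime.now(timezone.utc) + timedelta(days=90)),
    ]
    for cred in credentials_needing_rotation(inventory):
        print(f"Rotate {cred.name} used by {cred.owner_service} "
              f"(expires {cred.expires_at:%Y-%m-%d})")
```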


Role of Interconnects in GenAI

The emergence of High-Performance Computing (HPC) demanded a leap in interconnect capabilities. InfiniBand entered the scene, offering significantly higher throughput and lower latency compared to existing technologies. It became the cornerstone of data centers and large-scale computing environments, enabling the rapid exchange of massive datasets required for complex simulations and scientific computations. Simultaneously, the introduction of Peripheral Component Interconnect Express (PCIe) revolutionized off-chip communication. ... the scalability of GenAI models, particularly large language models, relies heavily on robust interconnects. These systems facilitate the distribution of computational load across multiple processors and machines, enabling the training and deployment of increasingly complex models. This scalability is achieved through efficient network topologies that minimize communication bottlenecks, allowing for both vertical and horizontal scaling. Parallel processing, a cornerstone of GenAI training, is also dependent on effective interconnects. Model and data parallelism require seamless communication and synchronization between processors working on different segments of data or model components. Interconnects ensure that these processors can exchange information efficiently, maintaining consistency and accuracy throughout the training process.


That breach cost HOW MUCH? How CISOs can talk effectively about a cyber incident’s toll

Many CISOs struggle to articulate the financial impact of cyber incidents. “The role of a CISO is really interesting and uniquely challenging because they have to have one foot in the technical world and one foot in the executive world,” Amanda Draeger, principal cybersecurity consultant at Liberty Mutual Insurance, tells CSO. “And that is a difficult challenge. Finding people who can balance that is like finding a unicorn.” ... Quantifying the costs of an incident in advance is an inexact art greatly aided by tabletop exercises. “The best way in my mind to flush all of this out is by going through a regular incident response tabletop exercise,” Gary Brickhouse, CISO at GuidePoint Security, tells CSO. “People know their roles so that when it does happen, you’re prepared.” It also helps to develop an incident response (IR) plan and practice it frequently. “I highly recommend having an incident response plan that exists on paper,” Draeger says. “I mean literal paper so that when your entire network explodes, you still have a list of phone numbers and contacts and something to get you started.” Not only does the incident response plan lead to better cost estimates, but it will also lead to a quicker return of network functions. “Practice, practice, practice,” Draeger says. 

Daily Tech Digest - July 18, 2024

The Critical Role of Data Cleaning

Data cleaning is a crucial step that eliminates irrelevant data, identifies outliers and duplicates, and fixes missing values. It involves removing errors, inconsistencies, and, sometimes, even biases from raw data to make it usable. While buying pre-cleaned data can save resources, understanding the importance of data cleaning is still essential. Inaccuracies can significantly impact results. In many cases, until low-value data is removed, the rest is hardly usable. Cleaning works as a filter, ensuring that the data passed on to the next step is more refined and relevant to your goals. ... At its core, data cleaning is the backbone of robust and reliable AI applications. It helps guard against inaccurate and biased data, ensuring AI models and their findings are on point. Data scientists depend on data cleaning techniques to transform raw data into a high-quality, trustworthy asset. ... Interestingly, LLMs that have been properly trained on clean data can play a significant role in the data cleaning process itself. Their advanced capabilities enable LLMs to automate and enhance various data cleaning tasks, making the process more efficient and effective.
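The sketch below shows a few of the cleaning steps described here (deduplication, missing-value handling, and simple outlier filtering) using pandas; the column names and thresholds are illustrative assumptions.

```python
# Sketch of common cleaning steps on a raw dataset with pandas.
import pandas as pd


def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    df = df.drop_duplicates(subset=["order_id"])        # remove duplicate records
    df = df.dropna(subset=["order_id", "customer_id"])  # drop rows missing keys
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df["amount"] = df["amount"].fillna(df["amount"].median())  # impute missing values
    # Filter extreme outliers using the interquartile range rule.
    q1, q3 = df["amount"].quantile([0.25, 0.75])
    iqr = q3 - q1
    df = df[df["amount"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]
    return df
```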


What Is Paravirtualization?

Paravirtualization builds upon traditional virtualization by offering extra services, improved capabilities or better performance to guest operating systems. With traditional virtualization, organizations abstract the underlying resources via virtual machines to the guest so they can run them as is, says Greg Schulz, founder of the StorageIO Group, an IT industry analyst consultancy. However, those virtual machines use all of the resources assigned to them, meaning there is a great deal of idle time, even though it doesn’t appear so, according to Kalvar. Paravirtualization uses software instruction to dynamically size and resize those resources, Kalvar says, turning VMs into bundles of resources. They are managed by the hypervisor, a software component that manages multiple virtual machines in a computer. ... One of the biggest advantages of paravirtualization is that it is typically more efficient than full virtualization because the hypervisor can closely manage and optimize resources between different operating systems. Users can manage the resources they consume on a granular basis. “I’m not buying an hour of a server, I’m buying seconds of resource time,” Kalvar says. 


Leaked Access Keys: The Silent Revolution in Cloud Security

The challenge for service accounts is that MFA does not work, and network-level protection (IP filtering, VPN tunneling, etc.) is not consistently applied, primarily due to complexity and costs. Thus, service account key leaks often enable hackers to access company resources. While phishing is unusual in the context of service accounts, leakages are frequently the result of developers posting keys (unintentionally) online, often in combination with code fragments that reveal the account to which they apply. ... Now, Google has changed the game with its recent policy change. If an access key appears in a public GitHub repository, GCP deactivates the key, regardless of whether applications crash. Google's announcement marks a shift in the risk and priority tango. Gone are the days when patching vulnerabilities could take days or weeks. Welcome to the fast-paced cloud era. Zero-second attacks after credential leakages demand zero-second fixing. Preventing an external attack becomes more important than avoiding crashing customer applications – that, at least, is Google's opinion.
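As a rough illustration of how such leaks can be caught before they reach a public repository, here is a minimal pre-commit-style scan for two well-known key formats. Real secret scanners (including GitHub's and Google's own) cover far more patterns; the file-walking logic here is an illustrative sketch.

```python
# Sketch: scanning files for strings that look like cloud access keys before
# they are committed or pushed.
import re
from pathlib import Path

KEY_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
}


def scan_for_keys(root: str) -> list[tuple[str, str]]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in KEY_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings


# Example: for f, kind in scan_for_keys("."): print(f"{kind} found in {f}")
```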


Juniper advances AI networking software with congestion control, load balancing

On the load balancing front, Juniper has added support for dynamic load balancing (DLB) that selects the optimal network path and delivers lower latency, better network utilization, and faster job completion times. From the AI workload perspective, this results in better AI workload performance and higher utilization of expensive GPUs, according to Sanyal. “Compared to traditional static load balancing, DLB significantly enhances fabric bandwidth utilization. But one of DLB’s limitations is that it only tracks the quality of local links instead of understanding the whole path quality from ingress to egress node,” Sanyal wrote. “Let’s say we have CLOS topology and server 1 and server 2 are both trying to send data called flow-1 and flow-2, respectively. In the case of DLB, leaf-1 only knows the local links utilization and makes decisions based solely on the local switch quality table where local links may be in perfect state. But if you use GLB, you can understand the whole path quality where congestion issues are present within the spine-leaf level.”


Impact of AI Platforms on Enhancing Cloud Services and Customer Experience

AI platforms enable businesses to streamline operations and reduce costs by automating routine tasks and optimizing resource allocation. Predictive analytics, powered by AI, allows for proactive maintenance and issue resolution, minimizing downtime and ensuring continuous service availability. This is particularly beneficial for industries where uninterrupted access to cloud services is critical, such as finance, healthcare, and e-commerce. ... AI platforms are not only enhancing backend operations but are also revolutionizing customer interactions. AI-driven customer service tools, such as chatbots and virtual assistants, provide instant support, personalized recommendations, and seamless user experiences. These tools can handle a wide range of customer queries, from basic information requests to complex problem-solving, thereby improving customer satisfaction and loyalty. The efficiency and round-the-clock availability of AI-driven tools make them invaluable for businesses. By the year 2025, it is expected that AI will facilitate around 95% of customer interactions, demonstrating its growing influence and effectiveness.


2 Essential Strategies for CDOs to Balance Visible and Invisible Data Work Under Pressure

Short-termism under pressure is a common mistake, resulting in an unbalanced strategy. How can we, as data leaders, successfully navigate such a scenario? “Working under pressure and with limited trust from senior management can force first-time CDOs to commit to an unbalanced strategy, focusing on short-term, highly visible projects – and ignore the essential foundation.” ... The desire to invest in enabling topics stems from the balance between driving and constraining forces. The senior management tends to ignore enabling topics because they rarely directly contribute to the bottom line; they can be a black box to a non-technical person and require multiple teams to collaborate effectively. On the other hand, Anne knew that the same people eagerly anticipated the impact of advanced analytics such as GenAI and were worried about potential regulatory risks. With the knowledge of the key enabling work packages and the motivating forces at play, Anne has everything she needs to argue for and execute a balanced long-term data strategy that does not ignore the “invisible” work required.


Gen AI Spending Slows as Businesses Exercise Caution

Generative AI has advanced rapidly over the past year, and organizations are recognizing its potential across business functions. But businesses have now taken a cautious stance regarding gen AI adoption due to steep implementation costs and concerns related to hallucinations. ... This trend reflects a broader shift away from the AI hype, and while businesses acknowledge the potential of this technology, they are also wary of the associated risks and costs, according to Michael Sinoway, CEO, Lucidworks. "The flattened spending suggests a move toward more thoughtful planning. This approach ensures AI adoption delivers real value, balancing competitiveness with cost management and risk mitigation," he said. ... Concerns regarding implementation costs, accuracy and data security have increased considerably in 2024. The number of business leaders with concerns related to implementation costs has increased 14-fold and those related to response accuracy have grown fivefold. While concerns about data security have increased only threefold, it remains the biggest worry.


CIOs are stretched more than ever before — and that’s a good thing

“Many CIOs have built years of credibility and trust by blocking and tackling the traditional responsibilities of the role,” she adds. “They’re now being brought to the conversation as business leaders to help the organization think through transformational priorities because they’re functional experts like any other executive in the C-suite.” ... “Boards want technology to improve the top and bottom line, which can be a tough balance, even if it’s one that CIOs are getting used to managing,” says Nash Squared’s White. “On the one hand, they’re being asked to promote innovation and help generate revenue, and on the other, they’re often charged with governance and security, too.” The importance of technology will only continue to increase going forward as well. Gen AI, for example, will make it possible to boost productivity while reducing costs. CyberArk’s Grossman expects the central role of digital leaders in exploiting these emerging technologies will mean high-level CIOs will be even more important in the future.


What Is a Sovereign Cloud and Who Truly Benefits From It?

A sovereign cloud is a cloud computing environment designed to help organizations comply with regulatory rules established by a particular government. This often entails ensuring that data stored within the cloud environment remains within a specific country. But it can also involve other practices, as we explain below. ... For one thing, cost. In general, cloud computing services on a sovereign cloud cost more than their equivalents on a generic public cloud. The exact pricing can vary widely depending on a number of factors, such as which cloud regions you select and which types of services you use, but in general, expect to pay a premium of at least 15% to use a sovereign cloud. A second challenge of using sovereign clouds is that in some cases your organization must undergo a vetting process to use them because some sovereign cloud providers only make their solutions available to certain types of organizations — often, government agencies or contractors that do business with them. This means you can't just create a sovereign cloud account and start launching workloads in a matter of minutes, as you could in a generic public cloud.


Securing datacenters may soon need sniffer dogs

So says Len Noe, tech evangelist at identity management vendor CyberArk. Noe told The Register he has ten implants – passive devices that are observable with a full body X-ray, but invisible to most security scanners. Noe explained he's acquired swipe cards used to access controlled premises, cloned them in his implants, and successfully entered buildings by just waving his hands over card readers. ... Noe thinks hounds are therefore currently the only reliable means of finding humans with implants that could be used to clone ID cards. He thinks dogs should be considered because attackers who access datacenters using implants would probably walk away scot-free. Noe told The Register that datacenter staff would probably notice an implant-packing attacker before they access sensitive areas, but would then struggle to find grounds for prosecution because implants aren't easily detectable – and even if they were the information they contain is considered medical data and is therefore subject to privacy laws in many jurisdictions.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - July 13, 2024

Work in the Wake of AI: Adapting to Algorithmic Management and Generative Technologies

Current legal frameworks are struggling to keep pace with the issues arising from algorithmic management. Traditional employment laws, such as those concerning unfair dismissal, often do not extend protections to “workers” as a distinct category. Furthermore, discrimination laws require proof that the discriminatory behaviour was due to, or related to, the protected characteristic, which is difficult to ascertain and prove with algorithmic systems. To mitigate these issues, the researchers recommend a series of measures. These include ensuring algorithmic systems respect workers’ rights, granting workers the right to opt out of automated decisions such as job termination, banning excessive data monitoring and establishing the right to a human explanation for decisions made by algorithms. ... Despite the rapid deployment of GenAI and the introduction of policies around its use, concerns about misuse are still prevalent among nearly 40% of tech leaders. While recognising AI’s potential, 55% of tech leaders have yet to identify clear business applications for GenAI beyond personal productivity enhancements, and budget constraints remain a hurdle for some.


The rise of sustainable data centers: Innovations driving change

Data centers contribute significantly to global carbon emissions, making it essential to adopt measures that reduce their carbon footprint. Carbon usage effectiveness (CUE) is a metric used to assess a data center's carbon emissions relative to its energy consumption. By minimizing CUE, data centers can significantly lower their environmental impact. ... Cooling is one of the largest energy expenses for data centers. Traditional air cooling systems are often inefficient, prompting the need for more advanced solutions. Free cooling, which leverages outside air, is a cost-effective method for data centers in cooler climates. Liquid cooling, on the other hand, uses water or other coolants to transfer heat away from servers more efficiently than air. ... Building and retrofitting data centers sustainably involves adhering to green building certifications like Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM). These certifications ensure that buildings meet high environmental performance standards.
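For reference, CUE is a straightforward ratio of total emissions to IT energy use and can be computed from metered values. The sketch below shows the calculation; the sample numbers are illustrative assumptions, not measurements from the article.

```python
# Sketch: computing carbon usage effectiveness (CUE).
# CUE = total CO2-equivalent emissions caused by the data center's energy use
# divided by the energy consumed by IT equipment (lower is better).
def carbon_usage_effectiveness(total_co2_kg: float, it_energy_kwh: float) -> float:
    """Return CUE in kgCO2eq per kWh of IT energy."""
    if it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_co2_kg / it_energy_kwh


if __name__ == "__main__":
    # e.g. 1,200,000 kgCO2eq emitted for 3,000,000 kWh of IT load -> CUE of 0.4
    print(round(carbon_usage_effectiveness(1_200_000, 3_000_000), 2))
```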


How AIOps Is Poised To Reshape IT Operations

A meaningfully different, as yet underutilized, high-value data set can be derived from the rich, complex interactions of information sources and users on the network, promising to triangulate and correlate with the other data sets available, elevating their combined value to the use case at hand. The challenge in leveraging this source is that the raw traffic data is impossibly massive and too complex for direct ingestion. Further, even compressed into metadata, without transformation, it becomes a disparate stream of rigid, high-cardinality data sets due to its inherent diversity and complexity. A new breed of AIOps solutions is poised to overcome this data deficiency and transform this still raw data stream into refined collections of organized data streams that are augmented and edited through intelligent feature extraction. These solutions use an adaptive AI model and a multi-step transformation sequence to work as an active member of a larger AIOps ecosystem by harmonizing data feeds with the workflows running on the target platform, making it more relevant and less noisy.


Addressing Financial Organizations’ Digital Demands While Avoiding Cyberthreats

The financial industry faces a difficult balancing act, with multiple conflicting priorities at the forefront. Organizations must continually strengthen security around their evolving solutions to keep up in an increasingly competitive and fast-moving landscape. But while strong security is a requirement, it cannot impact usability for customers or employees in an industry where accessibility, agility and the overall user experience are key differentiators. One of the best options to balancing these priorities is the utilization of secure access service edge (SASE) solutions. This model integrates several different security features such as secure web gateway (SWG), zero-trust network access (ZTNA), next-generation firewall (NGFW), cloud access security broker (CASB), data loss prevention (DLP) and network management functions, such as SD-WAN, into a single offering delivered via the cloud. Cloud-based delivery enables financial organizations to easily roll out SASE services and consistent policies to their entire network infrastructure, including thousands of remote workers scattered across various locations, or multiple branch offices to protect private data and users, as well as deployed IoT devices.


Three Signs You Might Need a Data Fabric

One of the most significant challenges organizations face is data silos and fragmentation. As businesses grow and adopt new technologies, they often accumulate disparate data sources across different departments and platforms. These silos make it tougher to have a holistic view of your organization's data, resulting in inefficiencies and missed opportunities. ... You understand that real-time analytics is crucial to your organization’s success. You need to respond quickly to changing market conditions, customer behavior, and operational events. Traditional data integration methods, which often rely on batch processing, can be too slow to meet these demands. You need real-time analytics to: manage the customer experience (if enhancing a customer’s experience through personalized and timely interactions is a priority, real-time analytics is essential); operate efficiently (real-time monitoring and analytics can help optimize operations, reduce downtime, and improve overall efficiency); and handle competitive pressure (staying ahead of competitors requires quick adaptation to market trends and consumer demands, which is facilitated by real-time insights).


The Tension Between The CDO & The CISO: The Balancing Act Of Data Exploitation Versus Protection

While data delivers a significant competitive advantage to companies when used appropriately, without the right data security measures in place it can be misused. This not only erodes customers’ trust but also puts the company at risk of having to pay penalties and fines for non-compliance with data security regulations. As data teams aim to extract and exploit data for the benefit of the organisation, it is important to note that not all data is equal. As such, a risk-based approach must be in place to limit access to sensitive data across the organisation. In doing this, the IT system will have access to the full spectrum of data to join and process the information, run through models and identify patterns, but employees rarely need access to all this detail. ... To overcome the conflict of data exploitation versus security and deliver a customer experience that meets customer expectations, data teams and security teams need to work together to achieve a common purpose and align on the culture. To achieve this, each team needs to listen to and understand the other’s needs and then identify solutions that work towards helping to make the other team successful.


Content Warfare: Combating Generative AI Influence Operations

Moderating such enormous amounts of content by human beings is impossible. That is why tech companies now employ artificial intelligence (AI) to moderate content. However, AI content moderation is not perfect, so tech companies add a layer of human moderation for quality checks to the AI content moderation processes. These human moderators, contracted by tech companies, review user-generated content after it is published on a website or social media platform to ensure it complies with the “community guidelines” of the platform. However, generative AI has forced companies to change their approach to content moderation. ... Countering such content warfare requires collaboration across generative AI companies, social media platforms, academia, trust and safety vendors, and governments. AI developers should build models with detectable and fact-sensitive outputs. Academics should research the mechanisms of foreign and domestic influence operations emanating from the use of generative AI. Governments should impose restrictions on data collection for generative AI, impose controls on AI hardware, and provide whistleblower protection to staff working in the generative AI companies. 


OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist. OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities. However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.


White House Calls for Defending Critical Infrastructure

The memo encourages federal agencies "to consult with regulated entities to establish baseline cybersecurity requirements that can be applied across critical infrastructures" while maintaining agility and adaptability to mature with the evolving cyberthreat landscape. ONCD and OMB also urged agencies and federal departments to study open-source software initiatives and the benefits that can be gained by establishing a governance function for open-source projects modeled after the private sector. Budget submissions should identify existing departments and roles designed to investigate, disrupt and dismantle cybercrimes, according to the memo, including interagency task forces focused on combating ransomware infrastructure and the abuse of virtual currency. Meanwhile, the administration is continuing its push for agencies to only use software provided by developers who can attest their compliance with minimum secure software development practices. The national cyber strategy - as well as the joint memo - directs agencies to "utilize grant, loan and other federal government funding mechanisms to ensure minimum security and resilience requirements" are incorporated into critical infrastructure projects.


Unifying Analytics in an Era of Advanced Tech and Fragmented Data Estates

“Data analytics has a last-mile problem,” according to Alex Gnibus, technical product marketing manager, architecture at Alteryx. “In shipping and transportation, you often think of the last-mile problem as that final stage of getting the passenger or the delivery to its final destination. And it’s often the most expensive and time-consuming part.” For data, there is a similar problem; when putting together a data stack, enabling the business at large to derive value from the data is a key enabler—and challenge—of a modern enterprise. Achieving business value from data is the last mile, which is made difficult by complex, numerous technologies that are inaccessible to the final business user. Gnibus explained that Alteryx solves this problem by acting as the “truck” that delivers tangible business value from proprietary data, offering data discovery, use case identification, preparation and analysis, insight-sharing, and AI-powered capabilities. Acting as the easy-to-use interface for a business’ data infrastructure, Alteryx is the AI platform for large-scale enterprise analytics that offers no-code, drag-and-drop functionality that works with your unique data framework configuration as it evolves.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective.
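One simple, model-agnostic way to surface the “why” behind a model's alerts is feature attribution. The sketch below uses scikit-learn's permutation importance on a synthetic alert classifier; the data and feature names are assumptions made for illustration, and this is only one of many explainability techniques.

```python
# Sketch: ranking which features drive an alert classifier's decisions using
# permutation importance, a simple model-agnostic explainability technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["failed_logins", "bytes_out", "new_country", "off_hours"]
X = rng.random((500, len(feature_names)))
# Toy ground truth: alerts driven mostly by failed logins and new-country access.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print features in order of how much shuffling them hurts the model.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:.3f}")
```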


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized computer resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, empowering greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy infrastructure building blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC, since the IaC code serves as a centralized and readily accessible source of documentation on the infrastructure. It also streamlines the testing process, allowing for automated unit testing of compliance, validation, and other processes before deployment. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.
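As a concrete illustration of an "infrastructure block" expressed in code, here is a minimal sketch using Pulumi's Python SDK; Pulumi is assumed here only as one example of an IaC tool (Terraform, CloudFormation, and others follow the same idea). The resource and names are illustrative, and running it requires a configured Pulumi project and cloud credentials.

```python
# Minimal infrastructure-as-code sketch: the resource is declared in code,
# so it is versioned, reviewable, and repeatable. `pulumi up` provisions it
# and `pulumi destroy` tears it down.
import pulumi
import pulumi_aws as aws

# Illustrative bucket; updating this code (rather than clicking in a console)
# is how adjustments are made under an IaC workflow.
artifact_bucket = aws.s3.Bucket(
    "build-artifacts",
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("artifact_bucket_name", artifact_bucket.id)
```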


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. Therefore, by understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization generated 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - May 08, 2024

The Important Difference Between Generative AI And AGI

Here are the key differences: Capability: Generative AI excels at replication and is adept at producing content based on learned patterns and datasets. It can generate impressive results within its specific scope but doesn't venture beyond its programming. AGI, on the other hand, aims to be a powerhouse of innovation, capable of understanding and creatively solving problems across various fields, much like a human would. Understanding: Generative AI operates without any real comprehension of its output; it uses statistical models and algorithms to predict and generate results based on previous data. AGI, by contrast, would need to develop a genuine understanding of the world around it, making connections and having insights that are currently beyond the reach of any AI system. Application: Today, Generative AI is widely used across industries to enhance human productivity and foster creativity, performing tasks ranging from simple data processing to complex content creation. AGI, however, remains a conceptual goal. 


Top strategies for ensuring data center reliability and uptime in 2024

Robust security measures constitute another cornerstone of data center reliability, safeguarding against both cyber threats and physical intrusions. Cybersecurity protocols should encompass multifaceted defense strategies, including perimeter security, network segmentation, encryption, and intrusion detection systems. Regular vulnerability assessments and penetration testing help identify and remediate potential weaknesses before they can be exploited by malicious actors. Physical security measures, such as access controls, surveillance systems, and environmental monitoring, bolster protection against unauthorized access and environmental hazards. Additionally, robust disaster recovery and business continuity plans should be in place to ensure swift recovery in the event of a security breach or natural disaster. Automation and orchestration technologies offer further avenues for enhancing data center reliability by streamlining operations and reducing the risk of human errors.


Reassessing Agile Software Development: Is It Dead or Can It Be Revived?

Why, exactly, do so many folks seem to dislike — and in some cases loathe — agile software development? There's no simple answer, but common themes include: Lack of specificity: A lot of complaints about agile emphasize that the concept is too high-level. As a result, actually implementing agile practices can be confusing because it's rarely clear exactly how to put agile into practice. Plus, the practices tend to vary significantly from one organization to another. Unrealistic expectations: Some agile critics suggest that the concept leads to unrealistic expectations — particularly from managers who think that as long as a development team embraces agile, it should be able to release features very quickly. In reality, even the best-planned agile practices can't guarantee that software projects will always stick to intended timelines. Misuse of the term "agile": In some cases, developers complain that team leads or managers slap the "agile" label on software projects even though few practices actually align with the agile concept. In other words, the term has ended up being used very broadly, in ways that make it hard to define what even counts as agile.


An Architect’s Competing Narratives

The biggest objection to basing architecture on the traditional EA narrative is that it is not transformation-focused. The ivory-tower analogy has stuck permanently in this space. The other architects in a practice are often quite vocal in their difficulty with the top-down control concepts that are necessary in the enterprise architect mindset. There are not enough of them to cover all the places they need to be. Their skills atrophy and therefore they lose the ability to critique others' work. They often treat their scope as if it conferred seniority or authority. That connection with ‘Enterprise’ or ‘Domain’ sometimes causes conflict, especially if they have not personally delivered a solution or outcome in a long time. In addition, the scope-based titles seem to interact poorly with other leadership roles in both IT and business, as there is no clear ownership. Another type of challenge has emerged in the last ten to fifteen years. You can think of this as the ‘pure EA’ or ‘whole EA’ challenge. Effectively, a group of practitioners and writers regularly point to technology-skilled EAs, call them IT EAs, and use that label to minimize the value of those practitioners.


Exploring generative AI's impact on ethics, data privacy and collaboration

Implementing GenAI presents organizations with multifaceted challenges, particularly data privacy and security. The accuracy of GenAI outputs – and the responsibility of organizations and employees to ensure that the outputs are representative and accurate – is also a significant challenge. Governance, transparency, and the presence of unexpected biases are additional hurdles. Concerns range from accidentally breaching intellectual property and copyright by sharing data in an unlicensed or unvetted tool to the potential for privacy breaches and cybersecurity threats that GenAI can exacerbate. This data may contain private information about people, sensitive business use cases, or health care data. Unauthorized access to or inappropriate disclosure of these types of data can cause harm to individuals or organizations. While privacy and security were previously associated with intellectual property (IP) and cybersecurity, the definition and scope have expanded in recent years to encompass data access management, data localization and the rights of data subjects.


AI chip shortages continue, but there may be an end in sight

The breakneck pace of AI adoption over the past two years has strained the industry’s ability to supply the special high-performance chips needed to run the process-intensive operations of genAI and AI in general. Most of the focus on processor shortages has been on the exploding demand for Nvidia GPUs and alternatives from various chip designers such as AMD, Intel, and the hyperscale datacenter operators, according to Benjamin Lee ... Nvidia is tackling the GPU supply shortage by increasing its CoWoS and HBM production capacities, according to TrendForce. “This proactive approach is expected to cut the current average delivery time of 40 weeks in half by the second quarter [of 2024], as new capacities start to come online,” TrendForce said in its report. ... On the software side of the equation, LLM creators are also developing smaller models tailored for specific tasks; they require fewer processing resources and rely on local, proprietary data — unlike the massive, amorphous algorithms that boast hundreds of billions or even more than a trillion parameters.


CDOs’ biggest problem? Getting colleagues to understand their role

One reason the role may be misunderstood, the report says, is because it’s relatively new. The CDO position first gained momentum around 2008, when organizations needed to ensure data quality and transparency to comply with regulations following the housing credit crisis of that era. The CDO role also lacks a standard list of responsibilities, potentially adding to the confusion, note the report’s authors Thomas H. Davenport, Randy Bean, and Richard Wang. One possible definition of the CDO is the organization’s leader responsible for data governance and use, including data analysis, mining, and processing. In many cases, CDOs focus on business objectives, but in other cases, they have equal business and technology remits, according to the authors. ... “The role runs the gamut from being a very traditional IT-focused role that is oriented to the management of data, to one that resides in the business and is focused on the application of data to create value,” he says. “I anticipate that we will see the role solidify over the coming years, with a bias to value creation.”


Open Source Is at a Crossroads

Struggles in open source communities undoubtedly stem from the greater economic climate. The start of the current decade saw a low interest rate environment, which Lorenc credits as ushering in a massive boom in the number of open source companies and projects. But now, we are experiencing significant realignment. “Time and money are even more scarce, making it harder for contributors or companies to allocate resources,” he said. “Many, but not all, open source businesses are at a crossroads,” said Fermyon CEO Matt Butcher. For ages, the theory was that you built an open source tool, established a community and then figured out how to monetize it. But now, the companies in that final stage are under immense pressure to increase profit, he said. “For some, that means abandoning the open source model.” A lack of resources to justify open source may also stem from a “plethora of riches” problem, explains Chris Aniszczyk, the chief technology officer of CNCF. With so many projects vying for attention, it’s easier than ever for innovative projects to lose out on the resources they require. 


Can NIS2 and DORA improve firms’ cybersecurity?

One of the biggest issues with both NIS2 and DORA is that they focus heavily on promoting security and resilience without providing end-users with a blueprint for success. In paying so much attention to the outcomes that enterprises should be working towards, they fail to offer clear step-by-step guidance on the actions that businesses should take to reach those end goals. This is in part due to a recognition that every business is different. With each individual organisation having a better understanding of its own unique digital footprint, the belief is that it makes more sense for enterprises to interpret the guidelines in a way that makes sense for them. This is very much the case with DORA, where enterprises shoulder the responsibility of not only defining what qualifies as a business-critical service but also pinpointing its interconnected dependencies. Unfortunately, allowing regulations to remain open to interpretation in this manner can lead to confusion and inconsistencies, adding complexity to the environment for both organisations and auditors.


Trusted Data Access and Sharing — Why Automation Is the Key to Achieving Value from Data Democratization

As organizations endeavor to democratize data access and empower individuals across the enterprise, the last mile of data delivery emerges as the critical phase in the journey. This final stretch represents an opportunity to share data and data products responsibly, helping to ensure that insights reach the right users when needed and with appropriate context. However, achieving individualized data access poses significant challenges and concerns that need addressing to realize the potential of data democratization. The last mile of data delivery is where organizations must contend with a wide range of variables, including specific user requirements, use cases, rules, and contextual nuances that inform policies for granting conditional entitlements to access the data responsibly.  ... The controls on sharing data need to be as granular as possible about who is requesting access and under what conditions to justify the data types for provisioning. However, the traditional manual approach to last-mile data delivery that requires negotiating with data consumers to understand their needs is a significant roadblock to democratization.
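As a rough illustration of what automating those conditional entitlements can look like, here is a minimal Python sketch of attribute-based rules evaluated at request time rather than through manual negotiation. The roles, tags, regions, and policy outcomes are hypothetical examples chosen for the sketch, not any specific vendor's model.

```python
# A minimal sketch of automating "last mile" conditional entitlements with
# attribute-based rules. All field names and policies are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str           # who is asking
    purpose: str        # declared use case
    region: str         # where the data will be processed
    dataset_tags: set   # sensitivity tags on the requested data product

POLICIES = [
    # Each rule pairs a condition on the request with the entitlement it grants.
    (lambda r: r.region not in {"eu", "us"}, "deny"),                # localization constraint
    (lambda r: "pii" in r.dataset_tags and r.purpose == "marketing",
     "grant_masked"),                                                # share, but mask identifying columns
    (lambda r: "pii" in r.dataset_tags and r.role != "analyst_certified", "deny"),
    (lambda r: True, "grant_full"),                                  # default for non-sensitive data
]

def evaluate(request: AccessRequest) -> str:
    """Return the first matching entitlement instead of queuing a manual review."""
    for condition, entitlement in POLICIES:
        if condition(request):
            return entitlement
    return "deny"

print(evaluate(AccessRequest("campaign_manager", "marketing", "eu", {"pii"})))        # grant_masked
print(evaluate(AccessRequest("analyst_certified", "fraud_detection", "eu", {"pii"})))  # grant_full
```

The point of the sketch is simply that the "who, why, where, and what" of each request can be captured as attributes and matched against declarative rules, so access decisions become repeatable and auditable rather than negotiated case by case.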



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - April 30, 2024

Transform Data Leadership: Core Skills for Chief Data Officers in 2024

The chief data officer is tasked not only with unlocking the value of data but explaining the importance of data as the lifeblood of the company across all levels. They must be effective storytellers who can interpret data in such a way that business stakeholders take notice. An effective CDO pairs storytelling with the supporting data and makes it easy to share insights with stakeholders and get their buy-in. For instance, how effective a CDO is in getting departmental buy-in might boil down to the ongoing department "showback reports" they can produce. "Credibility and integrity are two other important traits in addition to effective communication skills, as it is crucial for the CDO to gain the trust of their peers," Subramanian says. ... To garner support for data initiatives and drive organizational buy-in, chief data officers must be able to communicate complex data concepts in a clear and compelling manner to diverse stakeholders, including executives, business leaders, and technical teams. "CDOs have to serve as a bridge between the tech and operational aspects of the organization as they work to drive business value and increase data literacy and awareness," says Schwenk.


Mind-bending maths could stop quantum hackers, but few understand it

The task of cracking much current online security boils down to the mathematical problem of finding the two numbers that, when multiplied together, produce a given third number. You can think of this third number as a key that unlocks the secret information. As this number gets bigger, the amount of time it takes an ordinary computer to solve the problem becomes longer than our lifetimes. Future quantum computers, however, should be able to crack these codes much more quickly. So the race is on to find new encryption algorithms that can stand up to a quantum attack. ... Most lattice-based cryptography is based on a seemingly simple question: if you hide a secret point in such a lattice, how long will it take someone else to find the secret location starting from some other point? This game of hide and seek can underpin many ways to make data more secure. A variant of the lattice problem called “learning with errors” is considered to be too hard to break even on a quantum computer. As the size of the lattice grows, the amount of time it takes to solve is believed to increase exponentially, even for a quantum computer.
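To make the "learning with errors" idea concrete, here is a toy Python sketch in the spirit of Regev-style encryption: a single bit is hidden behind noisy inner products with a secret vector, and only someone who knows the secret can strip the noise away. The parameters, noise range, and variable names are illustrative assumptions chosen for readability; they are far too small to be secure.

```python
# Toy "learning with errors" (LWE) demo. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 16, 64, 3329              # lattice dimension, number of samples, modulus

def keygen():
    s = rng.integers(0, q, size=n)           # secret vector
    A = rng.integers(0, q, size=(m, n))      # public random matrix
    e = rng.integers(-2, 3, size=m)          # small noise (the "errors")
    b = (A @ s + e) % q                      # noisy inner products
    return s, (A, b)

def encrypt(pub, bit):
    A, b = pub
    r = rng.integers(0, 2, size=m)           # random 0/1 combination of the samples
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q        # embed the bit at roughly q/2
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    v = (c2 - c1 @ s) % q                    # leaves noise + bit * q/2
    return 1 if q // 4 < v < 3 * q // 4 else 0

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
print("toy LWE round-trip works")
```

Without the secret vector, an attacker sees only sums of noisy samples, and recovering the secret from them is exactly the lattice problem whose difficulty is believed to grow exponentially with the dimension, even for a quantum computer.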


Secure by Design: UK Enforces IoT Device Cybersecurity Rules

The connected-device law kicks in following repeated attacks against devices with known or easily guessable passwords, which have led to distributed denial-of-service attacks affecting major institutions, including the BBC as well as major U.K. banks such as Lloyds and the Royal Bank of Scotland. Officials said the law is designed not just for consumer protection but also to improve national cybersecurity resilience, including against malware that targets IoT devices, such as Mirai and its spinoffs, all of which can exploit default passwords in devices. Western officials have also warned that Chinese and Russian nation-state hacking groups exploit known vulnerabilities in consumer-grade network devices. U.S. authorities earlier this year disrupted a Chinese botnet used by a group tracked as Volt Typhoon, warning that Beijing threat actors used infected small office and home office routers to cloak their hacking activities. "It's encouraging to see growing emphasis on implementing best practices in securing IoT devices before they leave the factory," said Kevin Curran, a professor of cybersecurity at Ulster University in Northern Ireland.


Will New Government Guidelines Spur Adoption of Privacy-Preserving Tech?

“The risks of AI are real, but they are manageable when thoughtful governance practices are in place as enablers, not obstacles, to responsible innovation,” Dev Stahlkopf, Cisco’s executive VP and chief legal officer, said in the report. One of the biggest potential benefits of privacy-preserving technology is that it enables multiple parties to share their most valuable and sensitive data while keeping the underlying records private. “My data alone is good,” Hughes said. “My data plus your data is better, because you have indicators that I might not see, and vice versa. Now our models are smarter as a result.” Carmakers could benefit by using privacy-preserving technology to combine sensor data collected from engines. “I’m Mercedes. You’re Rolls-Royce. Wouldn’t it be great if we combined our engine data to be able to build a model on top of that that could identify and predict maintenance schedules better and therefore recommend a better maintenance schedule?” Hughes said. Privacy-preserving tech could also improve public health through the creation of precision medicine techniques or new medications.
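One deliberately simplified way such multi-party combination can work is additive secret sharing, in which each party splits its value into random shares so that only the combined total is ever revealed. The Python sketch below is a toy illustration under that assumption; the manufacturer names and figures are invented, and real deployments would use hardened protocols rather than this minimal scheme.

```python
# Toy additive secret sharing: several parties contribute to a joint statistic
# without revealing their raw values. Names and numbers are made up.
import random

MOD = 2**61 - 1   # shares live in a large modular range

def make_shares(value: int, n_parties: int):
    """Split a value into n random shares that sum to the value mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Each manufacturer's private reading (e.g. average engine hours before a fault)
private_values = {"maker_a": 812, "maker_b": 1043, "maker_c": 977}

# Every party splits its value and sends one share to each other party,
# so no single share reveals anything about the original number.
all_shares = [make_shares(v, len(private_values)) for v in private_values.values()]

# Each party sums the shares it received; only these partial sums are published.
column_sums = [sum(col) % MOD for col in zip(*all_shares)]
combined_total = sum(column_sums) % MOD

assert combined_total == sum(private_values.values())
print("joint total without exposing raw values:", combined_total)
```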


GQL: A New ISO Standard for Querying Graph Databases

The basis for graph computing is the property graph, which is superior at describing dynamically changing data. Graph databases have been widely used for decades, and only recently has the form generated new interest as a pivotal component in Large Language Model-based generative AI apps. A graph model can visualize complex, interconnected systems. The downside of LLMs is that they are black boxes of a sort, Rathle explained. “There’s no way to understand the reasoning behind the language model. It is just following a neural network and doing its thing,” he said. A knowledge graph can serve as external memory, a way to visualize how the LLM constructed its worldview. “So I can trace through the graph and see why it arrived at that answer,” Rathle said. Graph databases are also widely used by health care companies for drug discovery and by aircraft and other manufacturers as a way to visualize complex system design, Rathle said. “You have all these cascading dependencies and that calculation works really well in the graph,” Rathle said.
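As a small illustration of tracing such cascading dependencies through a property graph, here is a toy Python sketch of a traversal over nodes and typed relationships. The part names, relationship labels, and walk are invented for the example and are not GQL syntax; they only show why "which assemblies inherit this change" is a natural graph question.

```python
# A toy property graph and traversal over cascading dependencies.
# Node and relationship names are hypothetical.
graph = {
    # node -> list of (relationship, target) edges
    "landing_gear":   [("PART_OF", "airframe")],
    "hydraulic_pump": [("SUPPLIES", "landing_gear"), ("SUPPLIES", "brakes")],
    "seal_x17":       [("COMPONENT_OF", "hydraulic_pump")],
}

def downstream(node, depth=0, seen=None):
    """Walk outgoing edges to list everything a change to `node` could affect."""
    seen = seen if seen is not None else set()
    for rel, target in graph.get(node, []):
        if target not in seen:
            seen.add(target)
            print("  " * depth + f"{node} -{rel}-> {target}")
            downstream(target, depth + 1, seen)

# If seal_x17 is redesigned, which assemblies inherit that change?
downstream("seal_x17")
```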


Microsoft deputy CISO says gen AI can give organizations the upper hand

One of the major promises is a reduction in the fraud that often occurs around clinical trials. Bad actors have a vested interest in whether the drug will pass FDA inspections – literally. In other words, falsified results and insider trading are big risks. Applying AI-powered security to their operational technology in the manufacturing plant or lab can monitor the equipment and not only detect signs of failure but also alert the company to potential tampering. At the same time, they’re also looking at ways to improve drug and polymer research. “They build better products and shorten the cycle of go-to-market for drugs. That’s worth billions of dollars,” he said. “They have a patent that only lasts 10 years. If they can get to market faster, they can hold on to that market share more before it goes to the public.” But that transformation of the SOC is potentially the most impactful of use cases, especially as cybercriminals adopt generative AI and go to work without the guardrails that encumber organizations. “We’ve seen a dramatic adoption of what I would call open-source AI from attackers to be able to use and build models,” Bissell said.


What IT leaders need to know about the EU AI Act

Lawyers and other observers of the EU AI Act point to a couple major issues that could trip up CIOs. First, the transparency rules could be difficult to comply with, particularly for organizations that don’t have extensive documentation about their AI tools or don’t have a good handle on their internal data. The requirements to monitor AI development and use will add governance obligations for companies using both high-risk and general-purpose AIs. Secondly, although parts of the EU AI Act wouldn’t go into effect until two years after the final passage, many of the details affecting regulations have yet to be written. In some cases, regulators don’t have to finalize the rules until six months before the law goes into effect. The transparency and monitoring requirements will be a new experience for some organizations, Domino Data Lab’s Carlsson says. “Most companies today face a learning curve when it comes to capabilities for governing, monitoring, and managing the AI lifecycle,” he says. “Except for the most advanced AI companies or in heavily regulated industries like financial services and pharma, governance often stops with the data.”


Standard Chartered CEO on why cybersecurity has become a 'disproportionately huge topic' at board meetings

Working together with the CISO team and the risk cyber team as well — of course, they're technical experts themselves, and I think they’ve done an excellent job of putting the technical program around our own defense mechanisms. But probably the most challenging thing was to get the broad business — the people who weren't technical experts — to understand what role they have to play in our cybersecurity defenses. ... It was really interesting to go through that exercise early on and see how unfamiliar some of the business heads were with what their own crown jewels were. Once you identify and are clear on what the crown jewel is, then you have to be part of the defense mechanism around those. Each one of these things costs money or reduces flexibility or, if done incorrectly, impacts the customer experience. Really working through that and getting them involved in structuring their business around the cyber risk, it's an ongoing process, I don't think we'll ever be done with that. We've made huge progress in the past six, seven years, I'd say, as cyber risks have increased. 


The 5 components of a DataOps architecture

If the data comes from multiple sources with timestamps, you can blend it into one database. Use the aggregated data to produce summarized data at various levels of granularity, such as daily aggregate reports. Databases can produce satellite tables, link them together via keys, and automatically update them. For example, user ID is the key for personal information in a user transactions table. If the data is in a standardized field, use metadata to indicate each field's type (text, integer, category), labels, and size. ... DataOps is not just about the acquisition and use of business data. Archive rarely used or no longer relevant data, such as inactive accounts. Some satellite tables may need regular updates, such as lists of blocklisted IP addresses. Back up data regularly, and update lists of users who can access it. Data teams should decide the best schedule for the various updates and tests to guarantee continuous data integrity. The DataOps process may seem complex, but chances are you already have plenty of data and some of the components in place. Adding security layers may even be easy.
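As a rough sketch of that blending and aggregation step, the following pandas snippet combines two timestamped sources into one table, rolls the result up to a daily grain, and joins a satellite table back via the user ID key. The source names, column names, and the daily grain are illustrative assumptions for the example.

```python
# A minimal sketch of blending multi-source, timestamped data and producing
# a daily aggregate with a satellite table joined via a key. Illustrative only.
import pandas as pd

# Two sources with timestamps, blended into one transactions table
web = pd.DataFrame({
    "user_id": [1, 1, 2],
    "ts": pd.to_datetime(["2024-04-01 09:00", "2024-04-01 17:30", "2024-04-02 11:15"]),
    "amount": [20.0, 5.0, 12.5],
})
pos = pd.DataFrame({
    "user_id": [2, 3],
    "ts": pd.to_datetime(["2024-04-01 13:45", "2024-04-02 08:10"]),
    "amount": [40.0, 7.0],
})
transactions = pd.concat(
    [web.assign(source="web"), pos.assign(source="pos")], ignore_index=True
)

# Daily aggregate report: one row per user per day
daily = (transactions
         .assign(day=transactions["ts"].dt.date)
         .groupby(["user_id", "day"], as_index=False)["amount"].sum())

# "Satellite" table of personal attributes, linked back via the user_id key
users = pd.DataFrame({"user_id": [1, 2, 3], "segment": ["retail", "retail", "wholesale"]})
report = daily.merge(users, on="user_id", how="left")
print(report)
```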


Kubernetes: Future or Fad?

If you had asked somebody at the beginning of containerization, it was all about how containers have to be stateless. And what do we do today? We deploy databases into Kubernetes, and we love it. Cloud-nativeness isn’t just stateless anymore; I’d argue a good one-third of container workloads may be stateful today (with ephemeral or persistent state), and that share will keep increasing. The beauty of orchestration, automatic resource management, self-healing infrastructure, and everything in between is just too compelling not to use for “everything.” Anyhow, whatever happens to Kubernetes itself (maybe it will become an orchestration extension of the OCI?!), I think it will disappear from the eyes of the users. It (or its successor) will become the platform to build container runtime platforms. But to make that happen, debugging features need to be made available. At the moment you have to look way too deep into Kubernetes or agent logs to find and fix issues. Anyone who has never had to work out why a Let’s Encrypt certificate isn’t renewing may raise a hand now. To bring it to a close, Kubernetes certainly isn’t a fad, but I strongly hope it’s not going to be our future either. At least not in its current incarnation.



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.