Showing posts with label product news. Show all posts
Showing posts with label product news. Show all posts

Daily Tech Digest - January 22, 2026


Quote for the day:

"Lost money can be found. Lost time is lost forever. Protect what matters most." -- @ValaAfshar



PTP is the New NTP: How Data Centers Are Achieving Real-Time Precision

Precision Time Protocol (PTP) – an approach that is more complex to implement but worth the extra effort, enabling a whole new level of timing synchronization accuracy. ... Keeping network time in sync is important on any network. But it’s especially critical in data centers, which are typically home to large numbers of network-connected devices, and where small inconsistencies in network timing could snowball into major network synchronization problems. ... NTP works very well in situations where networks can tolerate timing inconsistencies of up to a few milliseconds (meaning thousandths of a second). But beyond this, NTP-based time syncing is less reliable due to limitations ... Unlike NTP, PTP doesn’t rely solely on a server-client model for syncing time across networked devices. Instead, it uses time servers in conjunction with a method called hardware timestamping on client devices. Hardware timestamping involves specialized hardware components, usually embedded in network interface cards (NICs), to track time. Central time servers still exist under PTP. But rather than having software on servers connect to the time servers, hardware devices optimized for the task do this work. The devices also include built-in clocks, allowing them to record time data faster than they could if they had to forward it to the generic clock on a server.


Why AI adoption requires a dedicated approach to cyber governance

Today enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. ... Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler. ... What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward. What’s more, human weakness is always present, and legacy systems can fail at crucial moments. ... While there’s no magic wand, there are tried-and-tested approaches that resolve and mitigate the risks of AI vendors and solutions. Mapping the flow of data around the organization helps reveal how it’s used and resolve blind spots. Requiring AI tools to include references for their outputs ensures that risk decisions are trustworthy and reliable.


What CIOs get wrong about integration strategy and how to fix it

As Gartner advises, business and IT should be equal partners in the definition of integration strategy, representing a radical departure from the traditional IT delivery and business “project sponsorship” model. This close collaboration and shared accountability result in dramatically higher success rates ... A successful integration strategy starts by aligning with the organization’s business drivers and strategic objectives while identifying the integration capabilities that need to be developed. Clearly defining the goals of technology implementation, establishing governance frameworks and decision-making authority and setting standards and principles to guide integration choices are essential. Success metrics should be tied to business outcomes, and the integration approach should support broader digital transformation initiatives. ... Create cross-functional data stewardship teams with authority to make binding decisions about data standards and quality requirements. Document what data needs to be shared between systems, which applications are the “source of truth.” Define and document any regulatory or performance requirements to guide your technical planning. ... Integrations that succeed in production are designed with clear system-of-record rules, traceable transactions, explicit recovery paths and well-defined operational ownership. Preemptive integration is not about reacting faster — it’s about ensuring failures never reach the business.


CFOs are now getting their own 'vibe coding' moment thanks to Datarails

For the modern CFO, the hardest part of the job often isn't the math—it's the storytelling. After the books are closed and the variances calculated, finance teams spend days, sometimes weeks, manually copy-pasting charts into PowerPoint slides to explain why the numbers moved. ... Datarails’ new agents sit on top of a unified data layer that connects these disparate systems. Because the AI is grounded in the company’s own unified internal data, it avoids the hallucinations common in generic LLMs while offering a level of privacy required for sensitive financial data. "If the CFO wants to leverage AI on the CFO level or the organization data, they need to consolidate the data," explained Datarails CEO and co-founder Didi Gurfinkel in an interview with VentureBeat. By solving that consolidation problem first, Datarails can now offer agents that understand the context of the business. "Now the CFO can use our agents to run analysis, get insights, create reports... because now the data is ready," Gurfinkel said. ... "Very soon, the CFO and the financial team themselves will be able to develop applications," Gurfinkel predicted. "The LLMs become so strong that in one prompt, they can replace full product runs." He described a workflow where a user could simply prompt: "That was my budget and my actual of the past year. Now build me the budget for the next year."


The internet’s oldest trust mechanism is still one of its weakest links

Attackers continue to rely on domain names as an entry point into enterprise systems. A CSC domain security study finds that large organizations leave this part of their attack surface underprotected, even as attacks become more frequent. ... Large companies continue to add baseline protections, though adoption remains uneven. Email authentication shows the most consistent improvement, driven by phishing activity and regulatory pressure. Organizations still leave email domains partially protected, which allows spoofing to persist. Other protections see much slower uptake. ... Consumer oriented registrars tend to emphasize simplicity and cost. Organizations that rely on them often lack access to protections that limit the impact of account compromise or social engineering. Risk increases as domain portfolios grow and change. ... Brand impersonation through domain spoofing remains widespread. Lookalike domains tied to major brands are often owned by third parties. Some appear inactive while still supporting email activity. Inactive domains with mail records allow attackers to send phishing messages that appear associated with trusted brands. Others are parked with advertising networks or held for later use. A smaller portion hosts malicious content, though dormant domains can be activated quickly. ... Gaps appear in infrastructure related areas. DNS redundancy and registry lock adoption lag, and many unicorns rely on consumer grade registrars. These limitations become more pronounced as operations scale.


Misconfigured demo environments are turning into cloud backdoors to the enterprise

Internal testing, product demonstrations, and security training are critical practices in cybersecurity, giving defenders and everyday users the tools and wherewithal to prevent and respond to enterprise threats. However, according to new research from Pentera Labs, when left in default or misconfigured states, these “test” and “demo” environments are yet another entry point for attackers — and the issue even affects leading security companies and Fortune 500 companies that should know better. ... After identifying an exposed instance of Hackazon, a free, intentionally vulnerable test site developed by Deloitte, during a routine cloud security assessment for a client, Yaffe performed a five-step hunt for exposed apps. His team uncovered 1,926 “verified, live, and vulnerable applications,” more than half of which were running on enterprise-owned infrastructure on AWS, Azure, and Google Cloud platforms. They then discovered 109 exposed credential sets, many accessible via a low-priority lab environment, tied to overly-privileged identity access management (IAM) roles. These often granted “far more access” than a ‘training’ app should, Yaffe explained, and provided attackers:Administrator-level access to cloud accounts, as well as full access to S3 buckets, GCS, and Azure Blob Storage; The ability to launch and destruct compute resources and read and write to secrets managers; Permissions to interact with container registries where images are stored, shared, and deployed.


Cyber Insights 2026: API Security – Harder to Secure, Impossible to Ignore

“We’re now entering a new API boom. The previous wave was driven by cloud adoption, mobile apps, and microservices. Now, the rise of AI agents is fueling a rapid proliferation of APIs, as these systems generate massive, dynamic, and unpredictable requests across enterprise applications and cloud services,” comments Jacob Ideskog ... The growing use of agentic AI systems and the way they act autonomously, making decisions and triggering workflows, is ballooning the number of APIs in play. “It isn’t just ‘I expose one billing API’,” he continues, “now there are dozens of APIs that feed data to LLMs or AI agents, accept decisions from AI agents, facilitate orchestration between services and micro-apps, and potentially expose ‘agentic’ endpoints ... APIs have been a major attack surface for years – the problem is ongoing. Starting in 2025 and accelerating through 2026 and beyond, the rapid escalation of enterprise agentic AI deployments will multiply the number of APIs and increase the attack surface. That alone suggests that attacks against APIs will grow in 2026. But the attacks themselves will scale and be more effective through adversaries’ use of their own agentic AI. Barr explains: “Agentic AI means that bad actors can automate reconnaissance, probe API endpoints, chain API calls, test business-logic abuse, and execute campaigns at machine scale. Possession of an API endpoint, particularly a self-service, unconstrained one, becomes a lucrative target. And AI can generate payloads, iterate quickly, bypass simple heuristics, and map dependencies between APIs.”


Complex VoidLink Linux Malware Created by AI

An advanced cloud-first malware framework targeting Linux systems was created almost entirely by artificial intelligence (AI), a move that signals significant evolution in the use of the technology to develop advanced malware. VoidLink — comprised of various cloud-focused capabilities and modules and designed to maintain long-term persistent access to Linux systems — is the first case of wholly original malware being developed by AI, according to Check Point Research, which discovered and detailed the malware framework last week. While other AI-generated malware exists, it's typically "been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open-source malware tools," ... The malware framework, linked to a suspected, unspecified Chinese actor, includes custom loaders, implants, rootkits, and modular plug-ins. It also automates evasion as much as possible by profiling a Linux environment and intelligently choosing the best strategy for operating without detection. Indeed, as Check Point researchers tracked VoidLink in real time, they watched it transform quickly from what appeared to be a functional development build into a comprehensive, modular framework that became fully operational in a short timeframe. However, while the malware itself was high-functioning out of the gate, VoidLink's creator proved to be somewhat sloppy in their execution.


What’s causing the memory shortage?

Right now, the industry is suffering the worst memory shortage in history, and that’s with three core suppliers: Micron Technology, SK Hynix, and Samsung. TrendForce, a Taipei-based market researcher that specializes in the memory market, recently said it expects average DRAM memory prices to rise between 50% and 55% this quarter compared to the fourth quarter of 2025. Samsung recently issued a similar warning. So what caused this? Two letters: AI. The rush to build AI-oriented data centers has resulted in virtually all of the memory supply being consumed by data centers. AI requires massive amounts of memory to process its gigantic data sets. A traditional server would usually come with 32 GB to 64 GB of memory, while AI servers have 128 GB or more. ... There are other factors at play here, too, of course. The industry is in a transition period between DDR4 and DDR5, as DDR5 comes online and DDR4 fades away. These transitions to a new memory format are never quick or easy, and it usually take years to make a full shift. There has also been increased demand from both client and server sides. With Microsoft ending support for Windows 10, a whole lot of laptops are being replaced with Windows 11 systems, and new laptops come with DDR5 memory — the same memory used in an AI server. ... “What’s likely to happen, from a market perspective, is we’ll see the market grow less in ’26 than we had anticipated, but ASPs are likely to stay or increase. ...” he said.


OpenAI CFO Comments Signal End of AI Hype Cycle

By focusing on “practical adoption,” OpenAI can close the gap between what AI now makes possible and how people, companies, and countries are using it day to day. “The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes,” she noted. “Infrastructure expands what we can deliver,” she continued. “Innovation expands what intelligence can do. Adoption expands who can use it. Revenue funds the next leap. This is how intelligence scales and becomes a foundation for the global economy.” The framing reflects a shift from big-picture AI promise to day-to-day deployment and measurable results. ... There’s also a gap between what AI can do and how people are actually using it in daily life, noted Natasha August, founder of RM11, a content monetization platform for creators in Carrollton, Texas. “AI tools are incredibly powerful, but for many people and businesses, it’s still unclear how to turn that power into something practical like saving time, making money, or improving how they work,” she told TechNewsWorld. In business, the gap lies between AI’s raw analytical capabilities and its ability to drive tangible, repeatable business outcomes, maintained Nithin Mummaneni ... “The winning play is less ‘AI that answers’ and more ‘AI that completes tasks safely and predictably,'” he continued. “Adoption happens when AI becomes part of the workflow, not a separate destination.”

Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates them from other notorious cybergangs like Lazarus Group, Fancy Bear, and REvil. If sensitive information such as your calendar, credentials, or security tokens is alive and well in browser tabs, Scattered Spider is able to acquire them. ... Once user credentials get into the wrong hands, attackers like Scattered Spider will move quickly to hijack previously authenticated sessions by stealing cookies and tokens. Securing the integrity of browser sessions can best be achieved by restricting unauthorized scripts from gaining access or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers, even after credentials have become compromised. ... Although browser security is the last mile of defense for malware-less attacks, integrating it into an existing security stack will fortify the entire network. By implementing activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture. 


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like a container and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately proceed to a rollback but can wait for one if a certain condition are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still looking into possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property. 


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that uses sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data.... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable, and specifically personalized to a user’s tastes, can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice is proven to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But, with FinOps, organizations can forecast demand more accurately, optimise usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources more efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping their hardware busier and more productive. The advantage is that hyperscalers can distribute workloads across multiple customers and manage capacity at a large scale, allowing them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyse energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and optionally offline or air-gapped options. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - December 30, 2024

Top Considerations To Keep In Mind When Designing Your Enterprise Observability Framework

Observability goes beyond traditional monitoring tools, offering a holistic approach that aggregates data from diverse sources to provide actionable insights. While Application Performance Monitoring (APM) once sufficed for tracking application health, the increasing complexity of distributed, multi-cloud environments has made it clear that a broader, more integrated strategy is essential. Modern observability frameworks now focus on real-time analytics, root cause identification, and proactive risk mitigation. ... Business optimization and cloud modernization often face resistance from teams and stakeholders accustomed to existing tools and workflows. To overcome this, it’s essential to clearly communicate the motivations behind adopting a new observability strategy. Aligning these motivations with improved customer experiences and demonstrable ROI helps build organizational buy-in. Stakeholders are more likely to support changes when the outcomes directly benefit customers and contribute to business success. ... Enterprise observability systems must manage vast volumes of data daily, enabling near real-time analysis to ensure system reliability and performance. While this task can be costly and complex, it is critical for maintaining operational stability and delivering seamless user experiences.


Blown the cybersecurity budget? Here are 7 ways cyber pros can save money

David Chaddock, managing director, cybersecurity, at digital services firm West Monroe, advises CISOs to start by ensuring or improving their cyber governance to “spread the accountability to all the teams responsible for securing the environment.” “Everyone likes to say that the CISO is responsible and accountable for security, but most times they don’t own the infrastructure they’re securing or the budget for doing the maintenance, they don’t have influence over the applications with the security vulnerabilities, and they don’t control the resources to do the security work,” he says. ... Torok, Cooper and others acknowledge that implementing more automation and AI capabilities requires an investment. However, they say the investments can deliver returns (in increased efficiencies as well as avoided new salary costs) that exceed the costs to buy, deploy and run those new security tools. ... Ulloa says he also saves money by avoiding auto-renewals on contracts – thereby ensuring he can negotiate with vendors before inking the next deal. He acknowledges missing one contract set on auto renew and got stuck with a 54% increase. “That’s why you have to have a close eye on those renewals,” he adds.


7 Key Data Center Security Trends to Watch in 2025

Historically, securing both types of environments in a unified way was challenging because cloud security tools worked differently from the on-prem security solutions designed for data centers, and vice versa. Hybrid cloud frameworks, however, are helping to change this. They offer a consistent way of enforcing access controls and monitoring for security anomalies across both public cloud environments and workloads hosted in private data centers. Building a hybrid cloud to bring consistency to security and other operations is not a totally new idea. ... Edge data centers can help to boost workload performance by locating applications and data closer to end-users. But they also present some unique security challenges, due especially to the difficulty of ensuring physical security for small data centers in areas that lack traditional physical security protections. Nonetheless, as businesses face greater and greater pressure to optimize performance, demand for edge data centers is likely to grow. This will likely lead to greater investment in security solutions for edge data centers. ... Traditionally, data center security strategies typically hinged on establishing a strong perimeter and relying on it to prevent unauthorized access to the facility. 


What we talk about when we talk about ‘humanness’

Civic is confident enough in its mission to know where to draw the line between people and agglomerations of data. It says that “personhood is an inalienable human right which should not be confused with our digital shadows, which ultimately are simply tools to express that personhood.” Yet, there are obvious cognitive shifts going on in how we as humans relate to machines and their algorithms, and define ourselves against them. In giving an example of how digital identity and digital humanness diverge, Civic notes “AI agents will have a digital identity and may execute actions on behalf of their owners, but themselves may not have a proof of personhood.” The implication is startling: algorithms are now understood to have identities, or to possess the ability to have them. The linguistic framework for how we define ourselves is no longer the exclusive property of organic beings. ... There is a paradox in making the simple fact of being human contingent on the very machines from which we must be differentiated. In a certain respect, asking someone to justify and prove their own fundamental understanding of reality is a kind of existential gaslighting, tugging at the basic notion that the real and the digital are separate realms.


Revolutionizing Oil & Gas: How IIoT and Edge Computing are Driving Real-Time Efficiency and Cutting Costs

Maintenance is a significant expense in oil and gas operations, but IIoT and edge computing are helping companies move from reactive maintenance to predictive maintenance models. By continuously monitoring the health of equipment through IIoT sensors, companies can predict failures before they happen, reducing costly unplanned shutdowns. ... In an industry where safety is paramount, IIoT and edge computing also play a critical role in mitigating risks to both personnel and the environment. Real-time environmental monitoring, such as gas leak detection or monitoring for unsafe temperature fluctuations, can prevent accidents and minimize the impact of any potential hazards. Consider the implementation of smart sensors that monitor methane leaks at offshore rigs. By analyzing this data at the edge, systems can instantly notify operators if any leaks exceed safe thresholds. This rapid response helps prevent harmful environmental damage and potential regulatory fines while also protecting workers’ safety. ... Scaling oil and gas operations while maintaining performance is often a challenge. However, IIoT and edge computing’s ability to decentralize data processing makes it easier for companies to scale up operations without overloading their central servers. 


Gain Relief with Strategic Secret Governance

Incorporating NHI management into cybersecurity strategy provides comprehensive control over cloud security. This approach enables businesses to extensively decrease the risk of security breaches and data leaks, creating a sense of relief in our increasingly digital age. With cloud services growing rapidly, the need for effective NHIs and secrets management is more critical than ever. A study by IDC predicts that by 2025, there will be a 3-fold increase in the data volumes in the digital universe, with 49% of this data residing in the cloud. NHI management is not limited to a single industry or department. It is applicable across financial services, healthcare, travel, DevOps, and SOC teams. Any organization working in the cloud can benefit from this strategic approach. As businesses continue to digitize, NHIs and secrets management become increasingly relevant. Adapting to effectively manage these elements can bring relief to businesses from the overwhelming task of cyber threats, offering a more secure, efficient, and compliant operational environment. ... The application of NHI management is not confined to singular industries or departments. It transcends multiple sectors, including healthcare, financial services, travel industries, and SOC teams. 


Five breakthroughs that make OpenAI’s o3 a turning point for AI — and one big challenge

OpenAI’s o3 model introduces a new capability called “program synthesis,” which enables it to dynamically combine things that it learned during pre-training—specific patterns, algorithms, or methods—into new configurations. These things might include mathematical operations, code snippets, or logical procedures that the model has encountered and generalized during its extensive training on diverse datasets. Most significantly, program synthesis allows o3 to address tasks it has never directly seen in training, such as solving advanced coding challenges or tackling novel logic puzzles that require reasoning beyond rote application of learned information. ... One of the most groundbreaking features of o3 is its ability to execute its own Chains of Thought (CoTs) as tools for adaptive problem-solving. Traditionally, CoTs have been used as step-by-step reasoning frameworks to solve specific problems. OpenAI’s o3 extends this concept by leveraging CoTs as reusable building blocks, allowing the model to approach novel challenges with greater adaptability. Over time, these CoTs become structured records of problem-solving strategies, akin to how humans document and refine their learning through experience. This ability demonstrates how o3 is pushing the frontier in adaptive reasoning.


Multitenant data management with TiDB

The foundation of TiDB’s architecture is its distributed storage layer, TiKV. TiKV is a transactional key-value storage engine that shards data into small chunks, each represented as a split. Each split is replicated across multiple nodes in the cluster using the Raft consensus algorithm to ensure data redundancy and fault tolerance. The sharding and resharding processes are handled automatically by TiKV, operating independently from the application layer. This automation eliminates the operational complexity of manual sharding—a critical advantage especially in complex, multitenant environments where manual data rebalancing would be cumbersome and error-prone. ... In a multitenant environment, where a single component failure could affect numerous tenants simultaneously, high availability is critical. TiDB’s distributed architecture directly addresses this challenge by minimizing the blast radius of potential failures. If one node fails, others take over, maintaining continuous service across all tenant workloads. This is especially important for business-critical applications where uptime is non-negotiable. TiDB’s distributed storage layer ensures data redundancy and fault tolerance by automatically replicating data across multiple nodes.


Deconstructing DevSecOps

Time and again I am reminded that there is a limit to how far collaboration can take a team. This can be because either another team has a limit to how much resources it is willing to allocate, or it is incapable of contributing regardless of its resources offered. This is often the case with cyber teams that haven't restructured or adapted the training of their personnel to support DevSecOps. To often these types are policy wonks that will happily redirect you to help desk instead of assisting anyone. Another huge problem is with tooling ecosystem itself. While DevOps has an embarrassment of riches in open source tooling, DevSecOps instead has an endless number of licensing fees awaiting. Worse yet, many of these tools are only designed to common security issues in code. This is still better than nothing but it is pretty underwhelming when you are responsible for remediating the shear number of redundant (or duplicate) findings that have no bearing. Once an organization begins to implement DevSecOps it can quickly spiral. This happens when the organization is unable to determine what is acceptable risk any longer. Once this happens any rapid prototyping capability will just not be allowed at this point.


Machine identities are the next big target for attackers

“Attackers are now actively exploring cloud native infrastructure,” said Kevin Bocek, Chief Innovation Officer at Venafi, a CyberArk Company. “A massive wave of cyberattacks has now hit cloud native infrastructure, impacting most modern application environments. To make matters worse, cybercriminals are deploying AI in various ways to gain unauthorized access and exploiting machine identities using service accounts on a growing scale. The volume, variety and velocity of machine identities are becoming an attacker’s dream.” ... “There is huge potential for AI to transform our world positively, but it needs to be protected,” Bocek continues. “Whether it’s an attacker sneaking in and corrupting or even stealing a model, a cybercriminal impersonating an AI to gain unauthorized access, or some new form of attack we have not even thought of, security teams need to be on the front foot. This is why a kill switch for AI – based on the unique identity of individual models being trained, deployed and run – is more critical than ever.” ... 83% think having multiple service accounts also creates a lot of added complexity, but most (91%) agree that service accounts make it easier to ensure that policies are uniformly defined and enforced across cloud native environments.



Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - December 04, 2024

Will AI help doctors decide whether you live or die?

One of the things GPT-4 “was terrible at” compared to human doctors is causally linked diagnoses, Rodman said. “There was a case where you had to recognize that a patient had dermatomyositis, an autoimmune condition responding to cancer, because of colon cancer. The physicians mostly recognized that the patient had colon cancer, and it was causing dermatomyositis. GPT got really stuck,” he said. IDC’s Shegewi points out that if AI models are not tuned rigorously and with “proper guardrails” or safety mechanisms, the technology can provide “plausible but incorrect information, leading to misinformation. “Clinicians may also become de-skilled as over-reliance on the outputs of AI diminishes critical thinking,” Shegewi said. “Large-scale deployments will likely raise issues concerning patient data privacy and regulatory compliance. The risk for bias, inherent in any AI model, is also huge and might harm underrepresented populations.” Additionally, AI’s increasing use by healthcare insurance companies doesn’t typically translate into what’s best for a patient. Doctors who face an onslaught of AI-generated patient care denials from insurance companies are fighting back — and they’re using the same technology to automate their appeals.


The Rise Of ‘Quiet Hiring’: 5 Ways To Use Trend For A Career Advantage

Adaptability is key in quiet hiring. When I interviewed Ross Thornley, Co-founder of AQai, an organization that provides adaptability training, he said, "We’re entering a period of volatility where expanding adaptability skills is essential." Whether it’s learning to manage budgets, mastering new software, or brushing up on leadership skills, the more versatile you are, the more indispensable you become. ... You might feel uncomfortable tooting your own horn, but staying silent about your successes can hurt you in the long run. Keep track of your achievements as you take on extra responsibilities. Highlight the skills you’re building and the results you’re delivering. Then, share them in conversations with your manager or during performance reviews. By showcasing your value, you ensure your work doesn’t go unnoticed. ... When holding onto status-quo ways, employees limit themselves from reaching heights that might improve engagement. Without exploration, there’s a greater potential to be misaligned with a job or responsibility that isn’t motivating. Every new role—whether formal or not—is an opportunity to grow and explore. Use this time to test out roles you might not have considered. See if you enjoy the work or if it’s a stepping stone to something even better.


Creating a unified data, AI and infrastructure strategy to scale innovation ambitions

To effectively leverage data and AI, organisations must first shift their mindset from merely collecting data to actively connecting the dots. This involves identifying the core problem that needs to be addressed and focusing on use cases that will yield maximum business impact, rather than isolating data collection and AI model development. ... To enhance AI implementation, organisations should shift from a use-case-driven approach to a capability-driven strategy, focusing on building reusable AI capabilities such as conversational AI and voice analytics for both internal and external service desks. A company exploring numerous use cases can then group them into distinct capabilities for greater efficiency. Establishing a centralised team dedicated to data, AI and infrastructure is essential to create a robust foundation and platform while allowing business units to develop their own AI-powered applications on top, ensuring consistency across the organisation. ... To succeed in scaling innovation and AI, organisations must move from merely collecting data to actively connecting data, AI and infrastructure. Today’s advancements in cloud and data management technologies enable this integration, fostering collaboration and driving innovation at scale.


AWS introduces S3 Tables, a new bucket type for data analytics

The new bucket type is S3 Table, for storing data in Apache Iceberg format. Iceberg is an open table format (OTF) used for storing data for analytics, and with richer features than Parquet alone. Parquet is the format used by Hadoop and by many data processing frameworks. Parquet and Iceberg are already widely used on S3, so why a new bucket type? Warfield said the popularity of Parquet in S3 was the rationale for S3 Tables. "We actually serve about 15 million requests per second to Parquet tables," he told us, but there is a maintenance burden. Internally, he said, "the structure of them is a lot like git, a ledger of changes, and the mutations get added as snapshots. Even with a relatively low rate of updates into your OTF you can quickly end up with hundreds of thousands of objects under your table." The consequence is poor performance. "In the OTF world it was anticipated that this would happen, but it was left to the customer to do the table maintenance tasks," Warfield said. The Iceberg project includes code to expire snapshots and clean up metadata, but it is still necessary "to go and schedule and run those Spark jobs." Apache Spark is a SQL engine for large scale data. Parquet on S3 was "a storage system on top of a storage system," said Warfield, making it sub-optimal.


Innovation Is Fun, but Infrastructure Pays the Bills

Innovation and platform infrastructure are intertwined — each move affects the other. Yet, many companies are stumbling because they’re too focused on innovation. They’re churning out apps, features, and updates at breakneck speed, all while standing on a wobbly foundation. It’s a classic case of putting the cart before the horse, and it affects the intended impact of some really great ideas. A strong platform infrastructure is your ticket to scalability and flexibility. It lets you pivot quickly to meet new market demands, integrate cutting-edge technologies, and expand your services without tearing everything down and starting from scratch. Plus, it trims the fat off your development and deployment times, letting you bring innovative ideas to market faster. Sidestepping platform infrastructure is a recipe for disaster. It can make your application sluggish, prone to crashes, and a sitting duck for cyberattacks. This isn’t just a headache for users — it’s a surefire way to tarnish your product’s reputation and negatively affect its success. Think of it like building a mansion on a shaky foundation; it doesn’t matter how grand it looks if it’s doomed to collapse.


Open-washing and the illusion of AI openness

Open-washing in AI refers to companies overstating their commitment to openness while keeping critical components proprietary. This approach isn’t new. We’ve seen cloud-washing, AI-washing, and now open-washing, all called out here. Marketing firms want the concept of being “open” to put them in a virtuous category of companies that save baby seals from oil spills. I don’t knock them, but let’s not get too far over our skis, billion-dollar tech companies. ... At the heart of open-washing is a distortion of the principles of openness, transparency, and reusability. Transparency in AI would entail publicly documenting how models are developed, trained, fine-tuned, and deployed. This would include full access to the data sets, weights, architectures, and decision-making processes involved in the models’ construction. Most AI companies fall short of this level of transparency. By selectively releasing parts of their models—often stripped of key details—they craft an illusion of openness. Reusability, another pillar of openness, is much the same. Companies allow access to their models via APIs or lightweight downloadable versions but prevent meaningful adaptation by tying usage to proprietary ecosystems. 


Microsoft hit with more litigation accusing it of predatory pricing

“All UK businesses and organizations that bought licenses for Windows Server via Amazon’s AWS, Google Cloud Platform, and Alibaba Cloud may have been overcharged and will be represented in this new ‘opt-out’ collective action,” the law firm statement said. The accusations make sense when viewed from a compliance/regulatory perspective. Although companies are allowed to give volume discounts and to offer other pricing differences for different customers, compliance issues kick in when the company controls an especially high percentage of the market. ... “Put simply, Microsoft is punishing UK businesses and organizations for using Google, Amazon, and Alibaba for cloud computing by forcing them to pay more money for Windows Server. By doing so, Microsoft is trying to force customers into using its cloud computing service, Azure, and restricting competition in the sector,” Stasi said. “This lawsuit aims to challenge Microsoft’s anti-competitive behavior, push them to reveal exactly how much businesses in the UK have been illegally penalized, and return the money to organizations that have been unfairly overcharged.”


Balancing tradition and innovation in the digital age

It’s easy to get carried away by the hype of cutting-edge technology. For me, it’s about making sure that you always ask yourself if you’re solving an actual business problem. That has to be front of mind, as opposed to being solution- or tech-first. You also have to ask yourself if the business problem requires nascent or proven tech? Once you figure that out, the tech side answer is relatively straightforward. So, even with leveraging emerging tech, you need to think congruently about your business model. ... Security is the first thing I looked at. Even in my interview, I said it would be the first thing I looked at, and it has been. Security and privacy are the basic foundations of trust, and customer and community trust is what our business is built on. So, my approach is to spend money to bring in deep expertise, which I have, and empower them to go deep into our current state and be honest about any gaps we might have. And to think about where we implement both tactical and strategic ways to bridge those gaps. It’s also important to be clear about the risk we hold and how long we want to hold it for and focus on building a response plan. So, if and when an incident occurs, we can recover and respond gracefully and have solid comms plans and playbooks in place. 


Threat intelligence and why it matters for cybersecurity

Cyber threat intelligence – who needs it? The short answer is everyone. Cyber threat intelligence is for anyone with a vested interest in the cybersecurity infrastructure of an organization. Although CTI can be tailored to suit any audience, in most cases, threat intelligence teams work closely with the Security Operation Centre (SOC) that monitors and protects a business on a daily basis. Research shows that CTI has proved beneficial to people at all levels of government (national, regional or local), from security officers, police chiefs and policymakers, to information technology specialists and law enforcement officers. It also provides value to many other professionals, such as IT managers, accountants and criminal analysts. ... The creation of cyber threat intelligence is a circular process known as an “intelligence cycle”. In this cycle, which consists of five stages, data collection is planned, implemented and evaluated; the results are then analysed to produce intelligence, which is later disseminated and re-evaluated against new information and consumer feedback. The circularity of the process means that gaps are identified in the intelligence delivered, initiating new collection requirements and launching the intelligence cycle all over again.


Securing AI’s new frontier: Visibility, governance, and mitigating compliance risks

Securing and governing the use of data for AI/ML model training is perhaps the most challenging and pressing issue in AI security. Using confidential or protected information during the training or fine-tuning process comes with the risk that data could be recoverable through model extraction techniques or using common adversarial techniques (i.e., prompt injection, jailbreak). Following data security and least-privilege access best practices is essential for protecting data during development, but bespoke AI runtime threat detection is response is required to avoid exfiltration of data via model responses. ... Securing AI applications in production is equally important as securing the underlying infrastructure and is a key component of maintaining a secure data and AI lifecycle. This requires real-time monitoring of both prompts and responses to identify, notify, and block security and safety threats. A robust AI security solution prevents adversarial attacks like prompt injection, masks sensitive data to prevent exfiltration via a model response, and also addresses safety concerns such as bias, fairness, and harmful content. 



Quote for the day:

"Leading people is like cooking. Don_t stir too much; It annoys the ingredients_and spoils the food" -- Rick Julian

Daily Tech Digest - October 16, 2024

AI Models in Cybersecurity: From Misuse to Abuse

In a constant game of whack-a-mole, both defenders and attackers are harnessing AI to tip the balance of power in their respective favor. Before we can understand how defenders and attackers leverage AI, we need to acknowledge the three most common types of AI models currently in circulation. ... Generative AI, Supervised Machine Learning, and Unsupervised Machine Learning are three main types of AI models. Generative AI tools such as ChatGPT, Gemini, and Copilot can understand human input and can deliver outputs in a human-like response. Notably, generative AI continuously refines its outputs based on user interactions, setting it apart from traditional AI systems. Unsupervised machine learning models are great at analyzing and identifying patterns in vast unstructured or unlabeled data. Alternatively, supervised machine learning algorithms make predictions from well-labeled, well-tagged, and well-structured datasets. ... Despite the media hype, the usage of AI by cybercriminals is still at nascent stage. This doesn’t mean that AI is not being exploited for malicious purposes, but it’s also not causing the decline of human civilization like some purport it to be. Cybercriminals use AI for very specific tasks


Meet Aria: The New Open Source Multimodal AI That's Rivaling Big Tech

Rhymes AI has released Aria under the Apache 2.0 license, allowing developers and researchers to adapt and build upon the model. It is also a very powerful addition to an expanding pool of open-source AI models led by Meta and Mistral, which perform similarly to the more popular and adopted closed-source models. Aria's versatility also shines across various tasks. In the research paper, the team explained how they fed the model with an entire financial report and it was capable of performing an accurate analysis, it can extract data from reports, calculate profit margins, and provide detailed breakdowns. When tasked with weather data visualization, Aria not only extracted the relevant information but also generated Python code to create graphs, complete with formatting details. The model's video processing capabilities also seem promising. In one evaluation, Aria dissected an hour-long video about Michelangelo's David, identifying 19 distinct scenes with start and end times, titles, and descriptions. This isn't simple keyword matching but a demonstration of context-driven understanding. Coding is another area where Aria excels. It can watch video tutorials, extract code snippets, and even debug them. 


Preparing for IT failures in an unpredictable digital world

By embracing multiple vendors and hybrid cloud environments, organizations would be better prepared so that if one platform goes down, the others can pick up the slack. While this strategy increases ecosystem complexity, it buys down the risk accepted by ensuring you’re prepared to recover and resilient to widespread outages in complex, hybrid, and cloud-based environments. ... It’s clear that IT failures aren’t just a possibility — they are inevitable. Simply waiting for things to go wrong before reacting is a high-risk approach that’s asking for trouble. Instead, organizations must go on the front foot and adopt a strategy that focuses on early detection, continuous monitoring, and risk prevention. This means planning for worst-case scenarios, but also preparing for recovery. After all, one of the planks of IT infrastructure management is business continuity. It’s about optimal performance when things are going well while ensuring that systems recover quickly and continue operating even in the face of major disruptions. This requires a holistic approach to IT management, where failures are anticipated, and recovery plans are in place. 


CIOs must adopt startup agility to compete with tech firms

CIOs often struggle with soft skills, despite knowing what needs to be done. We engage with CEOs and CFOs to foster alignment among the leadership team, as strong support from them is crucial. CIOs also need help gaining buy-in from other CXOs, particularly when it comes to automation initiatives. Our approach emphasises unlocking bandwidth within IT departments. If 90% of their resources are spent on running the business, there’s little time for innovation. We help them automate routine tasks, which allows their best people to focus on transformative efforts. ... CIOs play a crucial role in driving innovation and maintaining cost efficiency while justifying tech investments, especially as organisations become digital-first. A key challenge is controlling cloud costs, which often escalate as IT spending moves outside central control. To counter this, CIOs should streamline access to central services, reduce redundant purchases, and negotiate larger contracts for better discounts. They must also recognise that cloud services are not always cheaper; cost-efficiency depends on application types and usage. 


AI makes edge computing more relevant to CIOs

Many user-facing situations could benefit from edge-based AI. Payton emphasizes facial recognition technology, real-time traffic updates for semi-autonomous vehicles, and data-driven enhancements on connected devices and smartphones as possible areas. “In retail, AI can deliver personalized experiences in real-time through smart devices,” she says. “In healthcare, edge-based AI in wearables can alert medical professionals immediately when it detects anomalies, potentially saving lives.” And a clear win for AI and edge computing is within smart cities, says Bizagi’s Vázquez. There are numerous ways AI models at the edge could help beyond simply controlling traffic lights, he says, such as citizen safety, autonomous transportation, smart grids, and self-healing infrastructures. To his point, experiments with AI are already being carried out in cities such as Bahrain, Glasgow, and Las Vegas to enhance urban planning, ease traffic flow, and aid public safety. Self-administered, intelligent infrastructure is certainly top of mind for Dairyland’s Melby since efforts within the energy industry are underway to use AI to meet emission goals, transition into renewables, and increase the resilience of the grid.


Deepfake detection is a continuous process of keeping up with AI-driven fraud: BioID

BioID is part of the growing ecosystem of firms offering algorithmic defenses to algorithmic attacks. It provides an automated, real-time deepfake detection tool for photos and videos that analyzes individual frames and video sequences, looking for inter-frame or video codec anomalies. Its algorithm is the product of a German research initiative that brought together a number of institutions across sectors to collaborate on deepfake detection strategy. But it is also continuing to refine its neural network to keep up with the relentless pace of AI fraud. “We are in an ongoing fight of AI against AI,” Freiberg says. “We can’t just just lean back and relax and sell what we have. We’re continuously working on increasing the accuracy of our algorithms.” That said, Freiberg is not only offering doom and gloom. She points to the Ukrainian Ministry of Foreign Affairs AI ambassador, Victoria Shi, as an example of deepfake technology used with non-fraudulent intention. The silver lining is reflected in the branding of BioID’s “playground” for AI deepfake testing. At playground.bioid.com, users can upload media to have BioID judge whether or not it is genuine.


How Manufacturing Best Practices Shape Software Development

Manufacturers rely on bills of materials (BOMs) to track every component in their products. This transparency enables them to swiftly pinpoint the source of any issues that arise, ensuring they have a comprehensive understanding of their supply chain. In software, this same principle is applied through software bills of materials (SBOMs), which list all the components, dependencies and licenses used in a software application. SBOMs are increasingly becoming critical resources for managing software supply chains, enabling developers and security teams to maintain visibility over what’s being used in their applications. Without an SBOM, organizations risk being unaware of outdated or vulnerable components in their software, making it difficult to address security issues. ... It’s nearly impossible to monitor open source components manually at scale. But with software composition analysis, developers can automate the process of identifying security risks and ensuring compliance. Automation not only accelerates development but also reduces the risk of human error, so teams can manage vast numbers of components and dependencies efficiently.


Striking The Right Balance Between AI & Innovation & Evolving Regulation

The bottom line is that integrating AI comes with complex challenges to how an organisation approaches data privacy. A significant part of this challenge relates to purpose limitation – specifically, the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. To tackle this hurdle, it’s vital that organisations maintain a high level of transparency that discloses to users and consumers how the use of their data is evolving as AI is integrated. ... Just as the technology landscape has evolved, so have consumer expectations. Today, consumers are more conscious of and concerned with how their data is used. Adding to this, nearly two-thirds of consumers worry about AI systems lacking human oversight, and 93% believe irresponsible AI practices damage company reputations. As such, it’s vital that organisations are continuously working to maintain consumer trust as part of their AI strategy. With this said, there are many consumers who are willing to share their data as long as they receive a better personalised customer experience, showcasing that this is a nuanced landscape that requires attention and balance.


WasmGC and the future of front-end Java development

The approach being offered by the WasmGC extension is newer. The extension provides a generic garbage collection layer that your software can refer to; a kind of garbage collection layer built into WebAssembly. Wasm by itself doesn’t track references to variables and data structures, so the addition of garbage collection also implies introducing new “typed references” into the specification. This effort is happening gradually: recent implementations support garbage collection on “linear” reference types like integers, but complex types like objects and structs have also been added. ... The performance potential of languages like Java over JavaScript is a key motivation for WasmGC, but obviously there’s also the enormous range of available functionality and styles among garbage-collected platforms. The possibility for moving custom code into Wasm, and thereby making it universally deployable, including to the browser, is there. More broadly, one can’t help but wonder about the possibility of opening up the browser to other languages beyond JavaScript, which could spark a real sea-change to the software industry. It’s possible that loosening JavaScript’s monopoly on the browser will instigate a renaissance of creativity in programming languages.


Mind Your Language Models: An Approach to Architecting Intelligent Systems

The reason why we wanted a smaller model that's adapted to a certain task is, it's easier to operate, and when you're running LLMs, it's going to be much economical, because you can't run massive models all the time because it's very expensive and takes a lot of GPUs. Currently, we're struggling with getting GPUs in AWS. We searched all EU Frankfurt, Ireland, North Virginia. It's seriously a challenge now to get big GPUs to host your LLMs. The second part of the problem is, we started getting data. It's high quality. We started improving the knowledge graph. The one thing that is interesting when you think about semantic search is that when people interact with your system, even if they're working on the same problem, they don't end up using the same language. Which means that you need to be able to translate or understand the range of language that your users can actually interact with your system. ... We converted these facts with all of their synonyms, with all of the different ways one could potentially ask for this piece of data, and put everything into the knowledge graph itself. You could use LLMs to generate training data for your smaller models. 



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos