
Daily Tech Digest - January 17, 2025

The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and a tools plane. See the graphic below. The tools plane is a collection of APIs, stored procedures and external web calls to business partners. However, for organizations that have started their AI journey, it could also include calls to traditional machine learning models (wave No. 1) and LLMs (wave No. 2) operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs with full knowledge of an organization’s tools plane and allow them to build and execute the logic needed for the control plane. This can be done by providing a “few-shot” prompt to an LLM that has been fine-tuned on an organization’s tools plane. Below is an example of a “few-shot” prompt that answers the same hypothetical question presented earlier. This is also known as letting the LLM think slowly. ... If agentic AI still seems to be made up of too much magic, then consider the simple example below. Every developer who has to write code daily probably asks an LLM a question similar to the one below. ... Agentic AI is the next logical evolution of AI. It is based on capabilities with a solid footing in AI’s first and second waves. The promise is to use AI to solve more complex problems by allowing models to plan, execute tasks and revise; in other words, allowing them to think slowly. This also promises to produce more accurate responses.
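
To make the "control plane versus tools plane" idea concrete, here is a minimal sketch in Python of what an agentic loop over a tools plane might look like. Everything in it is illustrative: call_llm stands in for whatever model endpoint an organization exposes, and the single get_open_invoices tool and the prompt format are hypothetical, not any vendor's API.

import json

def get_open_invoices(customer_id: str) -> list[dict]:
    # Example tools-plane call (stubbed): in practice this wraps an internal API.
    return [{"invoice_id": "INV-42", "amount": 1200.0, "status": "open"}]

TOOLS = {"get_open_invoices": get_open_invoices}

FEW_SHOT_PROMPT = """You can call these tools: get_open_invoices(customer_id).
Example:
Q: Does customer C-7 owe us anything?
A: {{"tool": "get_open_invoices", "args": {{"customer_id": "C-7"}}}}
Q: {question}
A:"""

def call_llm(prompt: str) -> str:
    # Stand-in for a fine-tuned model; here we hard-code a plausible plan.
    return json.dumps({"tool": "get_open_invoices", "args": {"customer_id": "C-9"}})

def run_agent(question: str) -> str:
    plan = json.loads(call_llm(FEW_SHOT_PROMPT.format(question=question)))
    result = TOOLS[plan["tool"]](**plan["args"])  # execute the control-plane step
    return f"Tool {plan['tool']} returned: {result}"

print(run_agent("Does customer C-9 have any open invoices?"))

The point of the sketch is the division of labor: the model plans which tools-plane call to make, while ordinary code executes it and returns the result.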


AI datacenters putting zero emissions promises out of reach

Datacenters' use of water and land is another bone of contention, which, in combination with their reliance on tax breaks and the limited number of local jobs they deliver, will see them face growing opposition from local residents and environmental groups. Uptime highlights that many governments have set targets for GHG emissions to become net-zero by a set date, but warns that because the AI boom looks set to test power availability, it will almost certainly put these pledges out of reach. ... Many governments seem convinced of the economic benefits promised by AI at the expense of other concerns, the report notes. The UK is a prime example, this week publishing the AI Opportunities Action Plan and vowing to relax planning rules to prioritize datacenter builds. ... Increasing rack power presents several challenges, the report warns, including the sheer space taken up by power distribution infrastructure such as switchboards, UPS systems, distribution boards, and batteries. Without changes to the power architecture, many datacenters risk becoming an electrical plant built around a relatively small IT room. Solving this will call for changes such as medium-voltage (over 1 kV) distribution to the IT space and novel power distribution topologies. However, this overhaul will take time to unfold, with 2025 potentially a pivotal year for investment to make this possible.


State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into the mainstream is the full passkey-readiness of devices, operating systems and browsers. Apple (iOS, macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have fully integrated passkey support across their platforms: over 95 percent of all iOS & Android devices are passkey-ready, and over 90 percent of all iOS & Android devices have passkey functionality enabled. With Windows soon supporting synced passkeys, all major operating systems ensure users can securely and effortlessly access their credentials across devices. ... With full device support, a polished UX, growing user familiarity, and a proven track record among early adopter implementations, there’s no reason for businesses to delay adopting passkeys. The business advantages of passkeys are compelling. Companies that previously relied on SMS-based authentication can save considerably on SMS costs. Beyond that, enterprises adopting passkeys benefit from reduced support overhead (since fewer password resets are needed), lower risk of breaches (thanks to phishing-resistance), and optimized user flows that improve conversion rates. Collectively, these perks make a convincing business case for passkeys.


Balancing usability and security in the fight against identity-based attacks

AI and ML are a double-edged sword in cybersecurity. On one hand, cybercriminals are using these technologies to make their attacks faster and smarter. They can create highly convincing phishing emails, generate deepfake content, and even find ways to bypass traditional security measures. For example, generative AI can craft emails or videos that look almost real, tricking people into falling for scams. On the flip side, AI and ML are also helping defenders. These technologies allow security systems to quickly analyze vast amounts of data, spotting unusual behavior that might indicate compromised credentials. ... Targeted security training can be useful, but generally you want to reduce the human dependency as much as possible. This is why it is critical to have controls that can meet users where they are. If you can deliver point-in-time guidance, or technically prevent something like a user entering their password into a phishing site, it significantly reduces the dependency on the human to make the right decision unassisted every time. When you consider how hard it can be for even security professionals to spot the more sophisticated phishing sites, it’s essential that we help people out as much as possible with technical controls.


Understanding Leaderless Replication for Distributed Data

Leaderless replication is another fundamental replication approach for distributed systems. It alleviates problems of multi-leader replication while, at the same time, introducing its own problems. Write conflicts in multi-leader replication are tackled in leaderless replication with quorum-based writes and systematic conflict resolution. Cascading failures, synchronization overhead, and operational complexity can be handled in leaderless replication via its decentralized architecture. Removing leaders can simplify cluster management, failure handling, and recovery mechanisms. Any replica can handle writes/reads. ... Direct writes and coordination-based replication are the most common approaches in leaderless replication. In the first approach, clients write directly to node replicas, while in the second approach, there exist coordinator-mediated writes. It is worth mentioning that, unlike the leader-follower concept, coordinators in leaderless replication do not enforce a particular ordering of writes. ... Failure handling is one of the most challenging aspects of both approaches. While direct writes provide better theoretical availability, they can be problematic during failure scenarios. Coordinator-based systems can provide clearer failure semantics but at the cost of potential coordinator bottlenecks.
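
As a rough illustration of how quorum-based writes and reads replace a leader, the sketch below keeps N in-memory replicas and requires W write acknowledgments and R read responses with W + R > N. The versioning scheme (a timestamp with last-write-wins) and the node count are simplifications; real leaderless stores layer on vector clocks, hinted handoff, and read repair.

import time

class Replica:
    def __init__(self):
        self.store = {}  # key -> (version, value)

    def put(self, key, version, value):
        current = self.store.get(key, (-1, None))
        if version > current[0]:
            self.store[key] = (version, value)

    def get(self, key):
        return self.store.get(key)

N, W, R = 3, 2, 2  # W + R > N guarantees the read and write quorums overlap
replicas = [Replica() for _ in range(N)]

def quorum_write(key, value):
    version = time.time_ns()  # simplistic version; real systems use vector clocks
    acks = 0
    for r in replicas:
        r.put(key, version, value)
        acks += 1
        if acks >= W:
            break  # the write succeeds once W replicas acknowledge it
    return acks >= W

def quorum_read(key):
    responses = [resp for resp in (r.get(key) for r in replicas[:R]) if resp is not None]
    if not responses:
        return None
    return max(responses, key=lambda vv: vv[0])[1]  # newest version wins

quorum_write("user:1", {"name": "Ada"})
print(quorum_read("user:1"))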


Blockchain in Banking: Use Cases and Examples

Bitcoin has entered a space usually reserved for gold and sovereign bonds: national reserves. While the U.S. Federal Reserve maintains that it cannot hold Bitcoin under current regulations, other financial systems are paying close attention to its potential role as a store of value. On the global stage, Bitcoin is being viewed not just as a speculative asset but as a hedge against inflation and currency volatility. Governments are now debating whether digital assets can sit alongside gold bars in their vaults. Behind all this activity lies blockchain - providing transparency, security, and a framework for something as ambitious as a digital reserve currency. ... Financial assets like real estate, investment funds, or fine art are traditionally expensive, hard to divide, and slow to transfer. Blockchain changes this by converting these assets into digital tokens, enabling fractional ownership and simplifying transactions. UBS launched its first tokenized fund on the Ethereum blockchain, allowing investors to trade fund shares as digital assets. This approach reduces administrative costs, accelerates settlements, and improves accessibility for investors. Additionally, one of Central and Eastern Europe’s largest banks has tokenized fine art on Aleph Zero blockchain. This enables fractional ownership of valuable art pieces while maintaining verifiable proof of ownership and authenticity.


Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across multiple edge devices. This approach eliminates the need to transfer raw data to a central server, preserving privacy and reducing bandwidth consumption. Models are trained locally, with only aggregated updates shared to improve the global system. ... Localized data processing empowers edge devices to conduct real-time analytics, facilitating faster decision-making and minimizing reliance on central frameworks. This capability is fundamental for applications such as autonomous vehicles and industrial automation, where even milliseconds can be vital. ... Blockchain technology is pivotal in decentralized AI for edge computing by providing a secure, immutable ledger for data sharing and task execution across edge nodes. It ensures transparency and trust in resource allocation, model updates, and data verification processes. ... By processing data directly at the edge, decentralized AI removes the delays in sending data to and from centralized servers. This capability ensures faster response times, enabling near-instantaneous decision-making in critical real-time applications. ... Decentralized AI improves privacy protocols by empowering the processing of sensitive information locally on the device rather than sending it to external servers.
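
The sketch below illustrates the federated-averaging idea described above: each simulated edge device fits a small linear model on its own private data, and only the resulting weight vectors are averaged centrally. The model, data, and round count are toy choices for illustration, not a particular framework's API.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # Local training on one edge device; raw data never leaves this function.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(4):  # four edge devices, each with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # each round: local training, then averaging of updates
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)  # only aggregated updates leave the devices

print("learned weights:", global_w)  # should approach [2, -1]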


The Myth of Machine Learning Reproducibility and Randomness

The nature of ML systems contributes to the challenge of reproducibility. ML components implement statistical models that provide predictions about some input, such as whether an image is a tank or a car. But it is difficult to provide guarantees about these predictions. As a result, guarantees about the resulting probabilistic distributions are often given only in the limit, that is, as distributions across a growing sample. These outputs can also be described by calibration scores and statistical coverage, such as, “We expect the true value of the parameter to be in the range [0.81, 0.85] 95 percent of the time.” ... There are two basic techniques we can use to manage reproducibility. First, we control the seeds for every randomizer used; in practice there may be many. Second, we need a way to tell the system to serialize the training process executed across concurrent and distributed resources. Both approaches require the platform provider to include this sort of support. ... Despite the importance of these exact reproducibility modes, they should not be enabled in production. Engineering and testing should use these configurations for setup, debugging and reference tests, but not during final development or operational testing.
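
A minimal sketch of the first technique, controlling the seed of every randomizer in play, might look like the following. The libraries seeded here are just common examples; whichever frameworks a platform actually uses need equivalent treatment, and the deterministic-mode switch should, as noted above, stay confined to debugging and reference tests.

import os
import random
import numpy as np

SEED = 1234

# Hash randomization only changes for interpreters launched after this is set.
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)      # Python's built-in RNG
np.random.seed(SEED)   # NumPy's global RNG

try:
    import torch
    torch.manual_seed(SEED)                    # CPU and (by default) CUDA RNGs
    torch.use_deterministic_algorithms(True)   # raise on nondeterministic kernels
except ImportError:
    pass  # framework-specific seeding is only needed if the framework is in use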


The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted in sophisticated cyber-attacks. These attacks, often aimed at causing irreversible physical damage to critical engineering assets, highlight the risks of interconnected and digitized systems. Recent incidents like TRISIS, CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber threats from mere nuisances to potentially catastrophic events, orchestrated by state-sponsored groups and cybercriminals. These actors target not just financial gains but also disruptive outcomes and acts of warfare, blending cyber and physical attacks. Additionally, human-operated ransomware and targeted ICS/OT ransomware have become a growing concern in recent times. ... Traditional IT security measures, when applied to ICS/OT environments, can provide a false sense of security and disrupt engineering operations and safety. Thus, it is important to consider and prioritize the SANS Five ICS Cybersecurity Critical Controls. This freely available whitepaper sets forth the five most relevant critical controls for an ICS/OT cybersecurity strategy that can flex to an organization's risk model and provides guidance for implementing them.


Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and current employee skills, blending AI tools with hands-on evaluations, according to Monahan. AI-powered platforms are being used to match candidates with roles based on their skills, certifications, and experience. “Our platform has done this for years, and our new UMA (Upwork’s Mindful AI) enhances this process,” she said. Gartner, however, warned that “rapid skills evolutions can threaten quality of hire, as recruiters struggle to ensure their assessment processes are keeping pace with changing skills. Meanwhile, skills shortages place more weight on new hires being the right hires, as finding replacement talent becomes increasingly challenging. Robust appraisal of candidate skills is therefore imperative, but too many assessments can lead to candidate fatigue.” ... The shift toward skills-based hiring is further driven by a readiness gap in today’s workforce. Upwork’s research found that only 25% of employees feel prepared to work effectively alongside AI, and even fewer (19%) can proactively leverage AI to solve problems. “As companies navigate these challenges, they’re focusing on hiring based on practical, demonstrated capabilities, ensuring their workforce is agile and equipped to meet the demands of a rapidly evolving business landscape,” Monahan said.



Quote for the day:

“If you set your goals ridiculously high and it’s a failure, you will fail above everyone else’s success.” -- James Cameron

Daily Tech Digest - November 18, 2024

3 leadership lessons we can learn from ethical hackers

By nature, hackers possess a knack for looking beyond the obvious to find what’s hidden. They leverage their ingenuity and resourcefulness to address threats and anticipate future risks. And most importantly, they are unafraid to break things to make them better. Likewise, when leading an organization, you are often faced with problems that, from the outside, look insurmountable. You must handle challenges that threaten your internal culture or your product roadmap, and it’s up to you to decide the right path toward progress. Now is the most critical time to find those hidden opportunities to strengthen your organization and remain fearless in your decisions toward a stronger path. ... Leaders must remove ego and cultivate open communication within their organizations. At HackerOne, we build accountability through company-wide weekly Ask Me Anything (AMA) sessions to share organizational knowledge, ask tough questions about the business, and encourage employees to share their perspectives openly without fear of retaliation. ... Most hackers are self-taught enthusiasts. Young and without formal cybersecurity training, they are driven by a passion for their craft. Internal drive propels them to continue their search for what others miss. If there is a way to see the gaps, they will find them.


So, you don’t have a chief information security officer? 9 signs your company needs one

The cost to hire and retain a CISO is a major stumbling block for some organizations. Even promoting someone from within to a newly created CISO post can be expensive: total compensation for a full-time CISO in the US now averages $565,000 per year, not including other costs that often come with filling the position. ... Running cybersecurity on top of their own duties can be a tricky balancing act for some CIOs, says Cameron Smith, advisory lead for cybersecurity and data privacy at Info-Tech Research Group in London, Ontario. “A CIO has a lot of objectives or goals that don’t relate to security, and those sometimes conflict with one another. Security oftentimes can be at odds with certain productivity goals. But both of those (roles) should be aimed at advancing the success of the organization,” Smith says. ... A virtual CISO is one option for companies seeking to bolster cybersecurity without a full-time CISO. Black says this approach could make sense for companies trying to lighten the load of their overburdened CIO or CTO, as well as firms lacking the size, budget, or complexity to justify a permanent CISO. ... Not having a CISO in place could cost your company business with existing clients or prospective customers who operate in regulated sectors, expect their partners or suppliers to have a rigorous security framework, or require it for certain high-level projects.
Most importantly, AI agents can make advanced capabilities, including real-time data analysis, predictive modeling, and autonomous decision-making, available to a much wider group of people in any organization. That, in turn, gives companies a way to harness the full potential of their data. Simply put, AI agents are rapidly becoming essential tools for business managers and data analysts in industrial businesses, including those in chemical production, manufacturing, energy sectors, and more. ... In the chemical industry, AI agents can monitor and control chemical processes in real time, minimizing risks associated with equipment failures, leaks, or hazardous reactions. By analyzing data from sensors and operational equipment, AI agents can predict potential failures and recommend preventive maintenance actions. This reduces downtime, improves safety, and enhances overall production efficiency. ... AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries. For business managers and data analysts, the key takeaway is clear: AI agents are not just a future possibility—they are a present necessity, capable of driving efficiency, innovation, and growth in today’s competitive industrial environment.
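
As a toy illustration of the predictive-maintenance pattern described above, the sketch below flags sensor readings that drift far from recent behavior using a rolling z-score. The window size, threshold, and simulated temperature stream are invented for the example; a production agent would combine far richer models and data sources.

import numpy as np

def flag_anomalies(readings, window=20, z_threshold=3.0):
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean, std = np.mean(recent), np.std(recent)
        if std > 0 and abs(readings[i] - mean) / std > z_threshold:
            alerts.append((i, readings[i]))  # index and value of the suspicious reading
    return alerts

rng = np.random.default_rng(1)
temps = 70 + rng.normal(scale=0.5, size=200)  # normal process temperature readings
temps[150] = 78.0                             # inject the kind of spike a failing pump might cause
print(flag_anomalies(temps))                  # expected: the injected spike at index 150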


Want to Modernize Your Apps? Start By Modernizing Your Software Delivery Processes

A healthier approach to app modernization is to focus on modernizing your processes. Despite momentous changes in application deployment technology over the past decade or two, the development processes that best drive software innovation and efficiency — like the interrelated concepts and practices of agile, continuous integration/continuous delivery (CI/CD) and DevOps — have remained more or less the same. This is why modernizing your application delivery processes to take advantage of the most innovative techniques should be every business’s real focus. When your processes are modern, your ability to leverage modern technology and update apps quickly to take advantage of new technology follows naturally. ... In addition to modifying processes themselves, app modernization should also involve the goal of changing the way organizations think about processes in general. By this, I mean pushing developers, IT admins and managers to turn to automation by default when implementing processes. This might seem unnecessary because plenty of IT professionals today talk about the importance of automation. Yet, when it comes to implementing processes, they tend to lean toward manual approaches because they are faster and simpler to implement initially. 


The ‘Great IT Rebrand’: Restructuring IT for business success

To champion his reimagined vision for IT, BBNI’s Nester stresses the art of effective communication and the importance of a solid marketing campaign. In partnership with corporate communications, Nester established the Techniculture brand and lineup of related events specifically designed to align technology, business, and culture in support of enterprise goals. Quarterly Techniculture town hall meetings anchored by both business and technology leaders keep the several hundred Technology Solutions team members abreast of business priorities and familiar with the firm’s money-making mechanics, including a window into how technology helps achieve specific revenue goals, Nester explains. “It’s a can’t-miss event and our largest team engagement — even more so than the CEO videos,” he contends. The next pillar of the Techniculture foundation is Techniculture Live, an annual leadership summit. One third of the Technology Solutions Group, about 250 teammates by Nester’s estimates, participate in the event, which is not a deep dive into the latest technologies, but rather spotlights business performance and technology initiatives that have been most impactful to achieving corporate goals.


The Role of DSPM in Data Compliance: Going Beyond CSPM for Regulatory Success

DSPM is a data-focused approach to securing the cloud environment. By addressing cloud security from the angle of discovering sensitive data, DSPM is centered on protecting an organization’s valuable data. This approach helps organizations discover, classify, and protect data across all platforms, including IaaS, PaaS, and SaaS applications. Where CSPM is focused on finding vulnerabilities and risks for teams to remediate across the cloud environment, DSPM “gives security teams visibility into where cloud data is stored” and detects risks to that data. Security misconfigurations and vulnerabilities that may result in the exposure of data can be flagged by DSPM solutions for remediation, helping to protect an organization’s most sensitive resources. Beyond simply discovering sensitive data, DSPM solutions also address many questions of data access and governance. They provide insight into not only where sensitive data is located, but which users have access to it, how it is used, and the security posture of the data store. ... Every organization undoubtedly has valuable and sensitive enterprise, customer, and employee data that must be protected against a wide range of threats. Organizations can reap substantial benefits from DSPM in protecting data that is not stored on-premises.
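
The following toy sketch illustrates the discover-and-classify step that DSPM automates: scan stored objects for patterns that look like sensitive data and record where the data lives and who can read it. The regular expressions, object locations, and access map are purely illustrative; real DSPM products do this continuously across IaaS, PaaS, and SaaS stores.

import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

data_stores = {
    "s3://reports/q3.csv": "contact: jane.doe@example.com, ssn 123-45-6789",
    "s3://logs/app.log":   "request completed in 120ms",
}
access_map = {"s3://reports/q3.csv": ["analytics-team", "contractors"]}

def classify(stores):
    findings = []
    for location, content in stores.items():
        hits = [name for name, rx in PATTERNS.items() if rx.search(content)]
        if hits:
            findings.append({
                "location": location,
                "categories": hits,
                "readable_by": access_map.get(location, []),  # who can access it
            })
    return findings

for finding in classify(data_stores):
    print(finding)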


The hidden challenges of AI development no one talks about

Currently, AI developers spend too much of their time (up to 75%) on the "tooling" they need to build applications. Unless they have the technology to spend less time tooling, these companies won't be able to scale their AI applications. To add to the technical challenges, nearly every AI startup is reliant on NVIDIA GPU compute to train and run their AI models, especially at scale. Developing a good relationship with hardware suppliers or cloud providers like Paperspace can help startups, but the cost of purchasing or renting these machines quickly becomes the largest expense any smaller company will run into. Additionally, there is currently a battle to hire and keep AI talent. We've seen recently how companies like OpenAI are trying to poach talent from other heavy hitters like Google, which makes the process of attracting talent at smaller companies much more difficult. ... Training a Deep Learning model is almost always extremely expensive. This is a result of the combined function of resource costs for the hardware itself, data collection, and employees. In order to ameliorate this issue facing the industry's newest players, we aim to achieve several goals for our users: creating an easy-to-use environment, introducing an inherent replicability across our products, and providing access at the lowest possible cost.


Transforming code scanning and threat detection with GenAI

The complexity of software components and stacks can sometimes be mind-bending, so it is imperative to connect all these dots in as seamless and hands-free a way as possible. ... If you’re a developer with a mountain of feature requests and bug fixes on your plate and then receive a tsunami of security tickets that nobody’s incentivized to care about… guess which ones are getting pushed to the bottom of the pile? Generative AI-based agentic workflows are giving cybersecurity and engineering teams alike reason to see light at the end of the tunnel and to consider the possibility that SSDLC is on the near-term horizon. And we’re already seeing some promising changes in the market today. Imagine having an intelligent assistant that can automatically track issues, figure out which ones matter most, suggest fixes, and then test and validate those fixes, all at the speed of computing! We still need our developers to oversee things and make the final calls, but the software agent swallows most of the burden of running an efficient program. ... AI’s evolution in code scanning fundamentally reshapes our approach to security. Optimized generative AI LLMs can assess millions of lines of code in seconds and pay attention to even the most subtle and nuanced patterns, finding the needle in a haystack that humans almost always miss.
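
One small piece of that assistant, the triage step, can be sketched as a scoring function that ranks findings by severity, reachability, and exposure so the few that matter most surface first. The fields and weights below are invented for illustration; an agentic workflow would feed much richer context, and an LLM's judgment, into this ranking.

findings = [
    {"id": "SAST-101", "severity": 9.8, "reachable": True,  "internet_facing": True},
    {"id": "SAST-102", "severity": 5.3, "reachable": False, "internet_facing": False},
    {"id": "SAST-103", "severity": 7.5, "reachable": True,  "internet_facing": False},
]

def priority(finding):
    score = finding["severity"]
    score *= 1.5 if finding["reachable"] else 0.5        # is the vulnerable code actually called?
    score *= 1.3 if finding["internet_facing"] else 1.0  # exposed to untrusted traffic?
    return score

for f in sorted(findings, key=priority, reverse=True):
    print(f["id"], round(priority(f), 1))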


5 Tips for Optimizing Multi-Region Cloud Configurations

Multi-region cloud configurations get very complicated very quickly, especially for active-active environments where you’re replicating data constantly. Containerized microservice-based applications allow for faster startup times, but they also drive up the number of resources you’ll need. Even active-passive environments for cold backup-and-restore use cases are resource-heavy. You’ll still need a lot of instances, AMI IDs, snapshots, and more to achieve a reasonable disaster recovery turnaround time. ... The CAP theorem forces you to choose only two of the three options: consistency, availability, and partition tolerance. Since we’re configuring for multi-region, partition tolerance is non-negotiable, which leaves a battle between availability and consistency. Yes, you can hold onto both, but you’ll drive high costs and an outsized management burden. If you’re running active-passive environments, opt for consistency over availability. This allows you to use Platform-as-a-Service (PaaS) solutions to replicate your database to your passive region. ... For active-passive environments, routing isn’t a serious concern. You’ll use default priority global routing to support failover handling, end of story. But for active-active environments, you’ll want different routing policies depending on the situation in that region.
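
For the active-passive case, a priority (failover) routing policy can be sketched as below: send traffic to the highest-priority healthy region and fall back to the passive region only when the primary's health check fails. The region names and the health-check stub are illustrative.

REGIONS = [
    {"name": "us-east-1", "priority": 1},   # active
    {"name": "us-west-2", "priority": 2},   # passive (database replicated via PaaS)
]

def is_healthy(region_name: str) -> bool:
    # Stand-in for a real health check (HTTP probe, synthetic transaction, ...).
    return region_name != "us-east-1"       # simulate a primary-region outage

def route():
    for region in sorted(REGIONS, key=lambda r: r["priority"]):
        if is_healthy(region["name"]):
            return region["name"]
    raise RuntimeError("no healthy region available")

print(route())  # falls over to us-west-2 while us-east-1 is unhealthy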


Why API-First Matters in an AI-Driven World

Implementing an API-first approach at scale is a nontrivial exercise. The fundamental reason for this is that API-first involves “people.” It’s central to the methodology that APIs are embraced as socio-technical assets, and therefore, it requires a change in how “people,” both technical and non-technical, work and collaborate. There are some common objections to adopting API-first within organizations that raise their heads, as well as some newer framings, given the eagerness of many to participate in the AI-hyped landscape. ... Don’t try to design for all eventualities. Instead, follow good extensibility patterns that enable future evolution and design “just enough” of the API based on current needs. There are added benefits when you combine this tactic with API specifications, as you can get fast feedback loops on that design before any investments are made in writing code or creating test suites. ... An API-first approach is powerful precisely because it starts with a use-case-oriented mindset, thinking about the problem being solved and how best to present data that aligns with that solution. By exposing data thoughtfully through APIs, companies can encapsulate domain-specific knowledge, apply business logic, and ensure that data is served securely, self-service, and tailored to business needs.



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - November 16, 2024

New framework aims to keep AI safe in US critical infrastructure

According to a release issued by DHS, “this first-of-its-kind resource was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators — as well as the civil society and public sector entities that protect and advocate for consumers.” ... Naveen Chhabra, principal analyst with Forrester, said, “while average enterprises may not directly benefit from it, this is going to be an important framework for those that are investing in AI models.” ... Asked why he thinks DHS felt the need to create the framework, Chhabra said that developments in the AI industry are “unique, in the sense that the industry is going back to the government and asking for intervention in ensuring that we, collectively, develop safe and secure AI.” ... David Brauchler, technical director at cybersecurity vendor NCC, sees the guidelines as a beginning, pointing out that frameworks like this are just a starting point for organizations, providing them with big-picture guidelines, not roadmaps. He described the DHS initiative in an email as “representing another step in the ongoing evolution of AI governance and security that we’ve seen develop over the past two years. It doesn’t revolutionize the discussion, but it aligns many of the concerns associated with AI/ML systems with their relevant stakeholders.”


Building an Augmented-Connected Workforce

An augmented workforce can work faster and more efficiently thanks to seamless access to real-time diagnostics and analytics, as well as live remote assistance, observes Peter Zornio, CTO at Emerson, an automation technology vendor serving critical industries. "An augmented-connected workforce institutionalizes best practices across the enterprise and sustains the value it delivers to operational and business performance regardless of workforce size or travel restrictions," he says in an email interview. An augmented-connected workforce can also help fill some of the gaps many manufacturers currently face, Gaus says. "There are many jobs unfilled because workers aren't attracted to manufacturing, or lack the technological skills needed to fill them," he explains. ... For enterprises that have already invested in advanced digital technologies, the path leading to an augmented-connected workforce is already underway. The next step is ensuring a holistic approach when looking at tangible ways to achieve such a workforce. "Look at the tools your organization is already using -- AI, AR, VR, and so on -- and think about how you can scale them or connect them with your human talent," Gaus says. Yet advanced technologies alone aren't enough to guarantee long-term success.


DORA and why resilience (once again) matters to the board

DORA, though, might be overlooked because of its finance-specific focus. The act has not attracted the attention of NIS2, which sets out cybersecurity standards for 15 critical sectors in the EU economy. And NIS2 came into force in October; CIOs and hard-pressed compliance teams could be forgiven for not focusing on another piece of legislation that is due in the New Year. But ignoring DORA altogether would be short-sighted. Firstly, as Rodrigo Marcos, chair of the EU Council at cybersecurity body CREST points out, DORA is a law, not a framework or best practice guidelines. Failing to comply could lead to penalties. But DORA also covers third-party risks, which includes digital supply chains. The legislation extends to any third party supplying a financial services firm, if the service they supply is critical. This will include IT and communications suppliers, including cloud and software vendors. ... And CIOs are also putting more emphasis on resilience and recovery. In some ways, we have come full circle. Disaster recovery and business continuity were once mainstays of IT operations planning but moved down the list with the move to the cloud. Cyber attacks, and especially ransomware, have pushed both resilience and recovery right back up the agenda.


Data Is Not the New Oil: It’s More Like Uranium

Comparing data to uranium is an accurate analogy. Uranium is radioactive and it is imperative to handle it carefully to avoid radiation exposure, the effects of which are linked to serious health and safety concerns. Issues with the deployment of uranium, such as in reactors, for instance, can lead to radioactive fallouts that are expensive to contain and have long-term health consequences for impacted individuals. The possibility of uranium being stolen poses significant risks and global repercussions. Data exhibits similar characteristics. It is critical for it to be stored safely, and those who experience data theft are forced to deal with long-term consequences – identity theft and financial concerns, for example. An organization experiencing a cyberattack must deal with regulatory oversight and fines. In some cases, losing sensitive data can trigger significant global consequences. ... Maintaining a data chain of custody is paramount. Some companies allow all employees access to all records, which increases the surface area of a cyberattack, and compromised employees could lead to a data breach. Even a single compromised employee computer can lead to a more extensive hack. Consider the case of the nonprofit healthcare network Ascension, which operates 140 hospitals and 40 senior care facilities.


Palo Alto Reports Firewalls Exploited Using an Unknown Flaw

Palo Alto said the flaw is being remotely exploited, has a "critical" severity rating of 9.3 out of 10 on the CVSS scale and that mitigating the vulnerability should be treated with the "highest" urgency. One challenge for users: no patch is yet available to fix the vulnerability. Also, no CVE code has been allocated for tracking it. "As we investigate the threat activity, we are preparing to release fixes and threat prevention signatures as early as possible," Palo Alto said. "At this time, securing access to the management interface is the best recommended action." The company said it doesn't believe its Prisma Access or Cloud NGFW are at risk from these attacks. Cybersecurity researchers confirm that real-world details surrounding the attacks and flaws remain scant. "Rapid7 threat intelligence teams have also been monitoring rumors of a possible zero-day vulnerability, but until now, those rumors have been unsubstantiated," the cybersecurity firm said in a Friday blog post. Palo Alto first warned customers on Nov. 8 that it was investigating reports of a zero-day vulnerability in the management interface for some types of firewalls and urged them to lock down the interfaces. 


Award-winning palm biometrics study promises low-cost authentication

“By harnessing high-resolution mmWave signals to extract detailed palm characteristics,” he continued, “mmPalm presents a ubiquitous, convenient and cost-efficient option to meet the growing needs for secure access in a smart, interconnected world.” The mmPalm method employs mmWave technology, which is widely used in 5G networks, to capture a person’s palm characteristics by sending and analyzing reflected signals and thereby creating a unique palm print for each user. Beyond this, mmPalm also meets the difficulties that can arise in authentication technology like distance and hand orientation. The system uses a type of AI called the Conditional Generative Adversarial Network (cGAN) to learn different palm orientations and distances, and generates virtual profiles to fill in gaps. In addition, the system will adapt to different environments using a transfer learning framework so that mmPalm is suited to various settings. The system also builds virtual antennas to increase the spatial resolution of a commercial mmWave device. Tested with 30 participants over six months, mmPalm displayed a 99 percent accuracy rate and was resistant to impersonation, spoofing and other potential breaches.


Scaling From Simple to Complex Cache: Challenges and Solutions

To scale a cache effectively, you need to distribute data across multiple nodes through techniques like sharding or partitioning. This improves storage efficiency and ensures that each node only stores a portion of the data. ... A simple cache can often handle node failures through manual intervention or basic failover mechanisms. A larger, more complex cache requires robust fault-tolerance mechanisms. This includes data replication across multiple nodes, so if one node fails, others can take over seamlessly. This also includes more catastrophic failures, which may lead to significant downtime as the data is reloaded into memory from the persistent store, a process known as warming up the cache. ... As the cache gets larger, pure caching solutions struggle to provide linear performance in terms of latency while also allowing for the control of infrastructure costs. Many caching products were written to be fast at small scale. Pushing them beyond what they were designed for exposes inefficiencies in underlying internal processes. Potential latency issues may arise as more and more data are cached. As a consequence, cache lookup times can increase as the cache is devoting more resources to managing the increased scale rather than serving traffic.
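
A minimal consistent-hashing sketch shows one way sharding a cache across nodes can work: each node owns a slice of the key space, and adding or removing a node remaps only neighboring keys rather than rehashing everything. The virtual-node count and node names are illustrative.

import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted list of (hash, node); vnodes smooth out the distribution
        for node in nodes:
            for i in range(vnodes):
                h = self._hash(f"{node}#{i}")
                bisect.insort(self.ring, (h, node))

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # Walk clockwise from the key's hash to the next virtual node on the ring.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
for key in ["user:1", "user:2", "session:9f3"]:
    print(key, "->", ring.node_for(key))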


Understanding the Modern Web and the Privacy Riddle

The main question is users’ willingness to surrender their data and not question the usage of this data. This could be attributed to the effect of the virtual panopticon, where users believe they are cooperating with agencies (government or private) that claim to respect their privacy in exchange for services. The Universal ID project (Aadhar project) in India, for instance, began as a means to provide identity to the poor in order to deliver social services, but has gradually expanded its scope, leading to significant function creep. Originally intended for de-duplication and preventing ‘leakages,’ it later became essential for enabling private businesses, fostering a cashless economy, and tracking digital footprints. ... In the modern web, users occupy multiple roles—as service providers, users, and visitors—while adopting multiple personas. This shift requires greater information disclosure, as users benefit from the web’s capabilities and treat their own data as currency. The unraveling of privacy has become the new norm, where withholding information is no longer an option due to the stigmatization of secrecy. Over the past few years, there has been a significant shift in how consumers and websites view privacy. Users have developed a heightened sensitivity to the use of their personal information and now recognize their basic right to internet privacy.


Databases Are a Top Target for Cybercriminals: How to Combat Them

Most ransomware can encrypt pages within a database—Mailto, Sodinokibi (REvil), and Ragnar Locker—and destroy the database pages. This means the slow, unknown encryption of everything, from sensitive customer records to critical network resources, including Active Directory, DNS, and Exchange, and lifesaving patient health information. Because databases can continue to run even with corrupted pages, it can take longer to realize that they have been attacked. Most often, it is the wreckage of the attack that is found when the database is taken down for routine maintenance, and by that time, thousands of records could be gone. Databases are an attractive target for cybercriminals because they offer a wealth of information that can be used or sold on the dark web, potentially leading to further breaches and attacks. Industries such as healthcare, finance, logistics, education, and transportation are particularly vulnerable. The information contained in these databases is highly valuable, as it can be exploited for spamming, phishing, financial fraud, and tax fraud. Additionally, cybercriminals can sell this data for significant sums of money on dark web auctions or marketplaces.


The Impact of Cloud Transformation on IT Infrastructure

With digital transformation accelerating across industries, the IT ecosystem comprises traditional and cloud-native applications. This mixed environment demands a flexible, multi-cloud strategy to accommodate diverse application requirements and operational models. The ability to move workloads between public and private clouds has become essential, allowing companies to dynamically balance performance and cost considerations. We are committed to delivering cloud solutions supporting seamless workload migration and interoperability, empowering businesses to leverage the best of public and private clouds. ... With today’s service offerings and various tools, migrating between on-premises and cloud environments has become straightforward, enabling continuous optimization rather than one-time changes. Cloud-native applications, particularly containerization and microservices, are inherently optimized for public and private cloud setups, allowing for dynamic scaling and efficient resource use. To fully optimize, companies should adopt cloud-native principles, including automation, continuous integration, and orchestration, which streamline performance and resource efficiency. Robust tools like identity and access management (IAM), encryption, and automated security updates address security and reliability, ensuring compliance and data protection.



Quote for the day:

"The elevator to success is out of order. You’ll have to use the stairs…. One step at a time.” -- Rande Wilson