
Daily Tech Digest - February 07, 2026


Quote for the day:

"Success in almost any field depends more on energy and drive than it does on intelligence. This explains why we have so many stupid leaders." -- Sloan Wilson



Tiny AI: The new oxymoron in town? Not really!

Could SLMs and miniaturised models be the drink that would make today’s AI small enough to walk through these future doors without AI bumping into carbon-footprint issues? Would model compression tools like pruning, quantisation, and knowledge distillation help to lift some weight off the shoulders of heavy AI backyards? Lightweight models, edge devices that save compute resources, smaller algorithms that do not put huge stress on AI infrastructures, and AI that is thin on computational complexity: Tiny AI, as an AI creation and adoption approach, sounds unusual and promising at the outset. ... hardware innovations and new approaches to modelling that enable Tiny AI can significantly ease the compute and environmental burdens of large-scale AI infrastructures, avers Biswajeet Mahapatra, principal analyst at Forrester. “Specialised hardware like AI accelerators, neuromorphic chips, and edge-optimised processors reduces energy consumption by performing inference locally rather than relying on massive cloud-based models. At the same time, techniques such as model pruning, quantisation, knowledge distillation, and efficient architectures like transformers-lite allow smaller models to deliver high accuracy with far fewer parameters.” ... Tiny AI models run directly on edge devices, enabling fast, local decision-making by operating on narrowly optimised datasets and sending only relevant, aggregated insights upstream, Acharya spells out.
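
The compression techniques Mahapatra lists are easy to experiment with directly. Below is a minimal sketch, assuming PyTorch is installed; the toy model, layer sizes, and pruning ratio are illustrative assumptions, not anything from the article.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; Linear-heavy networks benefit most from dynamic quantisation.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Magnitude pruning: zero out the 30% smallest weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Post-training dynamic quantisation: store Linear weights as int8,
# quantising activations on the fly at inference time.
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantised)  # Linear layers are now dynamically quantised modules
```

Knowledge distillation, the third technique named, follows the same spirit: a smaller student model is trained to mimic a larger teacher's outputs rather than only the raw labels.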


Kali Linux vs. Parrot OS: Which security-forward distro is right for you?

The first thing you should know is that Kali Linux is based on Debian, which means it has access to the standard Debian repositories and their wealth of installable applications. ... There are also the 600+ preinstalled applications, most of which are geared toward information gathering, vulnerability analysis, wireless attacks, web application testing, and more. Many of those applications include industry-specific modifications, such as those for computer forensics, reverse engineering, and vulnerability detection. And then there are the two modes: Forensics Mode for investigation and "Kali Undercover," which makes the desktop blend in with Windows. ... Parrot OS (aka Parrot Security or just Parrot) is another popular pentesting Linux distribution that operates in a similar fashion. Parrot OS is also based on Debian and is designed for security experts, developers, and users who prioritize privacy. It's that last bit you should pay attention to. Yes, Parrot OS includes a collection of tools similar to Kali Linux's, but it also offers apps to protect your online privacy. To that end, Parrot is available in two editions: Security and Home. ... What I like about Parrot OS is that you have options. If you want to run tests on your network and/or systems, you can do that. If you want to learn more about cybersecurity, you can do that. If you want to use a general-purpose operating system that has added privacy features, you can do that.


Bridging the AI Readiness Gap: Practical Steps to Move from Exploration to Production

To bridge the gap between AI readiness and implementation, organizations can adopt the following practical framework, which draws from both enterprise experience and my ongoing doctoral research. The framework centers on four critical pillars: leadership alignment, data maturity, innovation culture, and change management. When addressed together, these pillars provide a strong foundation for sustainable and scalable AI adoption. ... This begins with a comprehensive, cross-functional assessment across the four pillars of readiness: leadership alignment, data maturity, innovation culture, and change management. The goal of this assessment is to identify internal gaps that may hinder scale and long-term impact. From there, companies should prioritize a small set of use cases that align with clearly defined business objectives and deliver measurable value. These early efforts should serve as structured pilots to test viability, refine processes, and build stakeholder confidence before scaling. Once priorities are established, organizations must develop an implementation road map that achieves the right balance of people, processes, and technology. This road map should define ownership, timelines, and integration strategies that embed AI into business workflows rather than treating it as a separate initiative. Technology alone will not deliver results; success depends on aligning AI with decision-making processes and ensuring that employees understand its value. 


Proxmox's best feature isn't virtualization; it's the backup system

Because backups are integrated into Proxmox instead of being bolted on as some third-party add-on, setting up and using backups is entirely seamless. Agents don't need to be configured per instance. No extra management is required, and no scripts need to be created to handle the running of snapshots and recovery. The best part about this approach is that it ensures everything will continue working with each OS update. Backups can also be viewed per instance, so it's easy to check how far back you can go and how many copies are available. The entire backup strategy within Proxmox is snapshot-based, leveraging localised storage when available. This allows Proxmox to create snapshots of not only running Linux containers, but also complex virtual machines. They're reliable, fast, and don't cause unnecessary downtime. But while they're powerful additions to a hypervisor configuration, the backups aren't difficult to use. This is key, since backups would be far less useful if they proved troublesome to use when it mattered most. These backups don't have to use local storage either. NFS, CIFS, and iSCSI can all be targeted as backup locations.  ... It can also be a mixture of local storage and cloud services, something we recommend and push for with a 3-2-1 backup strategy. But it's one thing to use Proxmox's snapshots and built-in tools, and a whole different ball game with Proxmox Backup Server. With PBS, we've got deduplication, incremental backups, compression, encryption, and verification.
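
For a sense of how these jobs can be driven outside the web UI, here is a hedged Python sketch that asks the Proxmox VE REST API to run a snapshot-mode backup. The host, node, VM ID, storage name, and API token are placeholders, and the exact parameters should be checked against your Proxmox version's API documentation.

```python
import requests

# Placeholder values; replace with your own host, node, API token, VM ID and storage.
PVE_HOST = "https://pve.example.com:8006"
NODE = "pve1"
API_TOKEN = "root@pam!backup=00000000-0000-0000-0000-000000000000"

headers = {"Authorization": f"PVEAPIToken={API_TOKEN}"}

# Ask the node to run a snapshot-mode backup of VM 101 to a configured storage target.
resp = requests.post(
    f"{PVE_HOST}/api2/json/nodes/{NODE}/vzdump",
    headers=headers,
    data={
        "vmid": 101,
        "storage": "backup-nfs",  # could equally be a CIFS-, iSCSI-backed or PBS storage
        "mode": "snapshot",
        "compress": "zstd",
    },
)
resp.raise_for_status()
print(resp.json()["data"])  # task ID of the background backup job
```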


The Fintech Infrastructure Enabling AI-Powered Financial Services

AI is reshaping financial services faster than most realize. Machine learning models power credit decisions. Natural language processing handles customer service. Computer vision processes documents. But there’s a critical infrastructure layer that determines whether AI-powered financial platforms actually work for end users: payment infrastructure. The disconnect is striking. Fintech companies invest millions in AI capabilities, recommendation engines, fraud detection, personalization algorithms. ... From a technical standpoint, the integration happens via API. The platform exposes user balances and transaction authorization through standard REST endpoints. The card provider handles everything downstream: card issuance logistics, real-time currency conversion, payment network settlement, fraud detection at the transaction level, dispute resolution workflows. This architectural pattern enables fintech platforms to add payment functionality in 8-12 weeks rather than the 18-24 months required to build from scratch. ... The compliance layer operates transparently to end users while protecting platforms from liability. KYC verification happens at multiple checkpoints. AML monitoring runs continuously across transaction patterns. Reporting systems generate required documentation automatically. The platform gets payment functionality without becoming responsible for navigating payment regulations across dozens of jurisdictions.
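
To make the integration pattern concrete (the platform exposing balances and authorization decisions over REST while the card provider handles issuance, conversion, and settlement downstream), here is a minimal, hypothetical Flask sketch. The endpoints, field names, and approval logic are illustrative assumptions, not the API of any provider mentioned in the article.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory ledger standing in for the platform's real balance store.
BALANCES = {"user-123": {"available": 25000, "currency": "USD"}}  # amounts in cents


@app.get("/users/<user_id>/balance")
def get_balance(user_id):
    """Endpoint the card provider can query before approving a transaction."""
    balance = BALANCES.get(user_id)
    if balance is None:
        return jsonify({"error": "unknown user"}), 404
    return jsonify(balance)


@app.post("/users/<user_id>/authorizations")
def authorize(user_id):
    """Approve or decline a card authorization pushed by the provider."""
    auth = request.get_json(force=True)
    balance = BALANCES.get(user_id, {"available": 0})
    approved = 0 < auth.get("amount", 0) <= balance["available"]
    if approved:
        balance["available"] -= auth["amount"]
    return jsonify({"approved": approved})


if __name__ == "__main__":
    app.run(port=8080)
```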


Context Engineering for Coding Agents

Context engineering is relevant for all types of agents and LLM usage, of course. My colleague Bharani Subramaniam’s simple definition is: “Context engineering is curating what the model sees so that you get a better result.” For coding agents, there is an emerging set of context engineering approaches and terms. The foundation of it is the configuration features offered by the tools, and the nitty-gritty part is how we conceptually use those features. ... One of the goals of context engineering is to balance the amount of context given - not too little, not too much. Even though context windows have technically gotten really big, that doesn’t mean that it’s a good idea to indiscriminately dump information in there. An agent’s effectiveness goes down when it gets too much context, and too much context is a cost factor as well, of course. Some of this size management is up to the developer: how much context configuration we create, and how much text we put in there. My recommendation would be to build up context like rules files gradually, and not pump too much stuff in there right from the start. ... As I said in the beginning, these features are just the foundation; it is up to humans to do the actual work of filling them with reasonable context. It takes quite a bit of time to build up a good setup, because you have to use a configuration for a while to be able to say whether it’s working well or not - there are no unit tests for context engineering. Therefore, people are keen to share good setups with each other.
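
One piece of the size management described above can be sketched very simply: assembling context from prioritised sources until a rough token budget is spent. This is a hypothetical illustration of the idea, not the behaviour of any particular coding agent, and the four-characters-per-token estimate is only a heuristic.

```python
# Hypothetical sketch: build agent context from prioritised sources under a budget.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; a real agent would use its own tokenizer.
    return max(1, len(text) // 4)


def build_context(sources: list[tuple[int, str]], budget_tokens: int = 8000) -> str:
    """sources: (priority, text) pairs; lower numbers are more important."""
    parts, used = [], 0
    for _, text in sorted(sources, key=lambda s: s[0]):
        cost = estimate_tokens(text)
        if used + cost > budget_tokens:
            continue  # skip a source entirely rather than truncate it mid-document
        parts.append(text)
        used += cost
    return "\n\n".join(parts)


context = build_context([
    (0, "# Project rules\nUse strict typing. Prefer small, pure functions."),
    (1, "# Architecture notes\nServices communicate over gRPC; see docs/adr."),
    (2, "# Recent changes\nPayment retries moved to a background worker."),
])
print(context)
```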


Reimagining The Way Organizations Hire Cyber Talent

The way we hire cybersecurity professionals is fundamentally flawed. Employers post unicorn job descriptions that combine three roles’ worth of responsibilities into one. Qualified candidates are filtered out by automated scans or rejected because their resumes don’t match unrealistic expectations. Interviews are rushed, mismatched, or even faked—literally, in some cases. On the other side, skilled professionals—many of whom are eager to work—find themselves lost in a sea of noise, unable to connect with the opportunities that align with their capabilities and career goals. Add in economic uncertainty, AI disruption and changing work preferences, and it’s clear the traditional hiring playbook simply isn’t working anymore. ... Part of fixing this broken system means rethinking what we expect from roles in the first place. Jones believes that instead of packing every security function into a single job description and hoping for a miracle, organizations should modularize their needs. Need a penetration tester for one month? A compliance SME for two weeks? A security architect to review your Zero Trust strategy? You shouldn’t have to hire full-time just to get those tasks done. ... Solving the cybersecurity workforce challenge won’t come from doubling down on job boards or resume filters. But organizations may be able to shift things in the right direction by reimagining the way they connect people to the work that matters—with clarity, flexibility and mutual trust.


News sites are locking out the Internet Archive to stop AI crawling. Is the ‘open web’ closing?

Publishers claim technology companies have accessed a lot of this content for free and without the consent of copyright owners. Some began taking tech companies to court, claiming they had stolen their intellectual property. High-profile examples include The New York Times’ case against ChatGPT’s parent company OpenAI and News Corp’s lawsuit against Perplexity AI. ... Publishers are also using technology to stop unwanted AI bots accessing their content, including the crawlers used by the Internet Archive to record internet history. News publishers have referred to the Internet Archive as a “back door” to their catalogues, allowing unscrupulous tech companies to continue scraping their content. ... The opposite approach – placing all commercial news behind paywalls – has its own problems. As news publishers move to subscription-only models, people have to juggle multiple expensive subscriptions or limit their news appetite. Otherwise, they’re left with whatever news remains online for free or is served up by social media algorithms. The result is a more closed, commercial internet. This isn’t the first time that the Internet Archive has been in the crosshairs of publishers, as the organisation was previously sued and found to be in breach of copyright through its Open Library project. ... Today’s websites become tomorrow’s historical records. Without the preservation efforts of not-for-profit organisations like The Internet Archive, we risk losing vital records.


Who will be the first CIO fired for AI agent havoc?

As CIOs deploy teams of agents that work together across the enterprise, there’s a risk that one agent’s error compounds itself as other agents act on the bad result, he says. “You have an endless loop they can’t get out of,” he adds. Many organizations have rushed to deploy AI agents because of the fear of missing out, or FOMO, Nadkarni says. But good governance of agents takes a thoughtful approach, he adds, and CIOs must consider all the risks as they assign agents to automate tasks previously done by human employees. ... Lawsuits and fines seem likely, and plaintiffs will not need new AI laws to file claims, says Robert Feldman, chief legal officer at database services provider EnterpriseDB. “If an AI agent causes financial loss or consumer harm, existing legal theories already apply,” he says. “Regulators are also in a similar position. They can act as soon as AI drives decisions past the line of any form of compliance and safety threshold.” ... CIOs will play a big role in figuring out the guardrails, he adds. “Once the legal action reaches the public domain, boards want answers to what happened and why,” Feldman says. ... CIOs should be proactive about agent governance, Osler recommends. They should require proof for sensitive actions and make every action traceable. They can also put humans in the loop for sensitive agent tasks, design agents to hand off action when the situation is ambiguous or risky, and they can add friction to high-stakes agent actions and make it more difficult to trigger irreversible steps, he says.
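
Those recommendations (proof for sensitive actions, traceability, humans in the loop, friction on irreversible steps) map naturally onto a thin policy wrapper around an agent's tool calls. The sketch below is a generic illustration under those assumptions, not any vendor's governance API; the action names and approval prompt are invented.

```python
import json
import time
import uuid

HIGH_RISK_ACTIONS = {"wire_transfer", "delete_records", "change_access_policy"}
AUDIT_LOG = "agent_audit.jsonl"


def require_human_approval(action: str, payload: dict) -> bool:
    """Stand-in for a real approval workflow (ticketing system, chat prompt, etc.)."""
    answer = input(f"Approve {action} with {payload}? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_guardrails(action: str, payload: dict, executor) -> dict:
    record = {"id": str(uuid.uuid4()), "ts": time.time(), "action": action, "payload": payload}
    if action in HIGH_RISK_ACTIONS and not require_human_approval(action, payload):
        record["outcome"] = "blocked"  # friction for irreversible, high-stakes steps
    else:
        record["outcome"] = executor(action, payload)  # the agent's actual tool call
    with open(AUDIT_LOG, "a") as fh:  # every action leaves a traceable record
        fh.write(json.dumps(record) + "\n")
    return record


# A low-risk action runs immediately; a high-risk one waits for a human decision.
execute_with_guardrails("send_status_email", {"to": "ops"}, lambda a, p: "done")
execute_with_guardrails("wire_transfer", {"amount": 50000}, lambda a, p: "done")
```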


Measuring What Matters: Balancing Data, Trust and Alignment for Developer Productivity

Organizations need to take steps over and above these frameworks. It's important to integrate those insights with qualitative feedback. With the right balance of quantitative and qualitative data insights, companies can improve DevEx, increase employee engagement, and drive overall growth. Productivity metrics can only be a game-changer if used carefully and in conjunction with a consultative human-based approach to improvement. They should be used to inform management decisions, not replace them. Metrics can paint a clear picture of efficiency, but only become truly useful once you combine them with a nuanced view of the subjective developer experience. ... People who feel safe at work are more productive and creative, so taking DevEx into account when optimizing processes and designing productivity frameworks includes establishing an environment where developers can flag unrealistic deadlines and identify and solve problems together, faster. Tools, including integrated development environments (IDEs), source code repositories and collaboration platforms, all help to identify the systemic bottlenecks that are disrupting teams' workflows and enable proactive action to reduce friction. Ultimately, this will help you build a better picture of how your team is performing against your KPIs, without resorting to micromanagement. Additionally, when company priorities are misaligned, confusion and complexity follow, which is exhausting for developers, who are forced to waste their energy on bridging the gaps, rather than delivering value.

Daily Tech Digest - January 16, 2025

How DPUs Make Collaboration Between AppDev and NetOps Essential

While GPUs have gotten much of the limelight due to AI, DPUs in the cloud are having an equally profound impact on how applications are delivered and network functions are designed. The rise of DPU-as-a-Service is breaking down traditional silos between AppDev and NetOps teams, making collaboration essential to fully unlock DPU capabilities. DPUs offload network, security, and data processing tasks, transforming how applications interact with network infrastructure. AppDev teams must now design applications with these offloading capabilities in mind, identifying which tasks can benefit most from DPUs—such as real-time data encryption or intensive packet processing. ... AppDev teams must explicitly design applications to leverage DPU-accelerated encryption, while NetOps teams need to configure DPUs to handle these workloads efficiently. This intersection of concerns creates a natural collaboration point. The benefits of this collaboration extend beyond security. DPUs excel at packet processing, data compression, and storage operations. When AppDev and NetOps teams work together, they can identify opportunities to offload compute-intensive tasks to DPUs, dramatically improving application performance. 


The CFO may be the CISO’s most important business ally

“Cybersecurity is an existential threat to every company. Gone are the days where CFOs could only be fired if they ran out of money, cooked the books, or had a major controls outage,” he said. “Lack of adequate resourcing of cybersecurity is an emerging threat to their very existence.” This sentiment reflects the reality that for most organizations cyber threat is the No. 1 business risk today, and this has significant implications for the strategic survival of the enterprise. It’s time for CISOs and CFOs to address the natural barriers to their relationship and develop a strategic partnership for the good of the company. ... CISOs should be aware of a few key strategies for improving collaboration with their CFO counterparts. The first is reverse mentoring. Because CFOs and CISOs come from differing perspectives and lead domains rife with terminology and details that can be quite foreign to the other, reverse mentoring can be important for building a bridge between the two. In such a relationship, the CISO can offer insights into cybersecurity, while simultaneously learning to communicate in the CFO’s financial language. This mutual learning creates a more aligned approach to organizational risk. Second, CISOs must also develop their commercial perspective.


Establishing a Software-Based, High-Availability Failover Strategy for Disaster Mitigation and Recovery

No one should be surprised that cloud services occasionally go offline. If you think of the cloud as “someone else’s computer,” then you recognize there are servers and software behind it all. Someone else is doing their best to keep the lights on in the face of events like human error, natural disasters, and DDoS and other types of cyberattacks. Someone else is executing their disaster response and recovery plan. While the cloud may well be someone else’s computer, when there is a cloud outage that affects your operations, it is your problem. You are at the mercy of someone else to restore services so you can get back online. It doesn’t have to be that way. Cloud-dependent organizations can adopt strategies that allow them to minimize the risk that someone else’s outage will knock them offline. One such strategy is to take advantage of hybrid or multi-cloud architecture to achieve operational resiliency and high availability through service redundancy via SANless clustering. Normally, a storage area network (SAN) provides the shared storage used to configure clustered nodes on-premises, in the cloud, and at a disaster recovery site. It’s a proven approach, but because it is hardware dependent, it is costly in terms of dollars and computing resources, and comes with additional management demands.


Trusted Apps Sneak a Bug Into the UEFI Boot Process

UEFI is a kind of sacred space — a bridge between firmware and operating system, allowing a machine to boot up in the first place. Any malware that invades this space will earn a dogged persistence through reboots, by reserving its own spot in the startup process. Security programs have a harder time detecting malware at such a low level of the system. Even more importantly, by loading first, UEFI malware will simply have a head start over those security checks that it aims to avoid. Malware authors take advantage of this order of operations by designing UEFI bootkits that can hook into security protocols, and undermine critical security mechanisms like UEFI Secure Boot or HVCI, Windows' technology for blocking unsigned code in the kernel. To ensure that none of this can happen, the UEFI Boot Manager verifies every boot application binary against two lists: "db," which includes all signed and trusted programs, and "dbx," including all forbidden programs. But when a vulnerable binary is signed by Microsoft, the matter is moot. Microsoft maintains a list of requirements for signing UEFI binaries, but the process is a bit obscure, Smolár says. "I don't know if it involves only running through this list of requirements, or if there are some other activities involved, like manual binary reviews where they look for not necessarily malicious, but insecure behavior," he says.
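
Conceptually, the db/dbx check described here is an allow-list/deny-list lookup keyed on the hash (or signing certificate) of each boot binary, with dbx taking precedence. The Python sketch below illustrates only that decision logic; real Secure Boot verification happens in firmware against signed EFI signature databases, and the example hashes and path are arbitrary placeholders.

```python
import hashlib
from pathlib import Path

# Illustrative stand-ins for the signature databases stored in UEFI variables.
DB_ALLOWED_SHA256 = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
DBX_FORBIDDEN_SHA256 = {"60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752"}


def boot_decision(binary_path: str) -> str:
    digest = hashlib.sha256(Path(binary_path).read_bytes()).hexdigest()
    if digest in DBX_FORBIDDEN_SHA256:
        return "refuse: revoked via dbx"
    if digest in DB_ALLOWED_SHA256:
        return "load: trusted via db"
    return "refuse: not present in db"


print(boot_decision("/boot/efi/EFI/BOOT/BOOTX64.EFI"))
```

The problem described in the article is that a vulnerable but validly signed binary keeps passing the db check until its hash or certificate is added to dbx.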


How CISOs Can Build a Disaster Recovery Skillset

In a world of third-party risk, human error, and motivated threat actors, even the best prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work? “It is an opportunity for the CISO to step in and lead,” says Erwin. “That's the most critical thing a CISO is going to do in those incidents, and if the CISO isn't capable doing that or doesn't show up and shape the response, well, that's an indication of a problem.” CISOs, naturally, want to guide their enterprises through a cybersecurity incident. But disaster recovery skills also apply to their own careers. “I don't see a world where CISOs don't get some blame when an incident happens,” says Young. There is plenty of concern over personal liability in this role. CISOs must consider the possibility of being replaced in the wake of an incident and potentially being held personally responsible. “Do you have parachute packages like CEOs do in their corporate agreements for employability when they're hired?” Young asks. “I also see this big push of not only … CISOs on the D&O insurance, but they're also starting to acquire private liability insurance for themselves directly.”


Site Reliability Engineering Teams Face Rising Challenges

While AI adoption continues to grow, it hasn't reduced operational burdens as expected. Performance issues are now considered as critical as complete outages. Organizations are also grappling with balancing release velocity against reliability requirements. ... Daoudi suspects that there are a series of contributing factors that have led to the unexpected rise in toil levels. The first is AI systems maintenance: AI systems themselves require significant maintenance, including updating models and managing GPU clusters. AI systems also often need manual supervision due to subtle and hard-to-predict errors, which can increase the operational load. Additionally, the free time created by expediting valuable activities through AI may end up being filled with toilsome tasks, he said. "This trend could impact the future of SRE practices by necessitating a more nuanced approach to AI integration, focusing on balancing automation with the need for human oversight and continuous improvement," Daoudi said. Beyond AI, Daoudi also suspects that organizations are incorrectly evaluating toolchain investments. In his view, despite all the investments in inward-focused application performance management (APM) tools, there are still too many incidents, and the report shows a sentiment for insufficient observability instrumentation.


The Hidden Cost of Open Source Waste

Open source inefficiencies impact organizations in ways that go well beyond technical concerns. First, they drain productivity. Developers spend as much as 35% of their time untangling dependency issues or managing vulnerabilities — time that could be far better spent building new products, paying down technical debt, or introducing automation to drive cost efficiencies. ... Outdated dependencies compound the challenge. According to the report, 80% of application dependencies remain un-upgraded for over a year. While not all of these components introduce critical vulnerabilities, failing to address them increases the risk of undetected security gaps and adds unnecessary complexity to the software supply chain. This lack of timely updates leaves development teams with mounting technical debt and a higher likelihood of encountering issues that could have been avoided. The rapid pace of software evolution adds another layer of difficulty. Dependencies can become outdated in weeks, creating a moving target that’s hard to manage without automation and actionable insights. Teams often play catch-up, deepening inefficiencies and increasing the time spent on reactive maintenance. Automation helps bridge this gap by scanning for risks and prioritizing high-impact fixes, ensuring teams focus on the areas that matter most.
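
The automation the report calls for often starts with nothing more exotic than surfacing stale dependencies so they can be prioritised. A minimal sketch, assuming a Python project with pip on the PATH:

```python
import json
import subprocess

# Ask pip for outdated packages in machine-readable form.
raw = subprocess.run(
    ["pip", "list", "--outdated", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

outdated = json.loads(raw)
for pkg in sorted(outdated, key=lambda p: p["name"].lower()):
    print(f'{pkg["name"]}: {pkg["version"]} -> {pkg["latest_version"]}')

print(f"{len(outdated)} dependencies have newer releases available")
```

The same idea generalises to other ecosystems (npm outdated for Node, for example), and feeding the output into a scheduled job is a first step toward the actionable insights the article describes.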


The Virtualization Era: Opportunities, Challenges, and the Role of Hypervisors

Choosing the most appropriate hypervisor requires thoughtful consideration of an organization’s immediate needs and long-term goals. Scalability is a crucial factor, as the selected solution must address current workloads and seamlessly adapt to future demands. A hypervisor that integrates smoothly with an organization’s existing IT infrastructure reduces the risks of operational disruptions and ensures a cost-effective transition. Equally important is the financial aspect, where businesses must look beyond the initial licensing fees to account for potential hidden costs, such as staff training, ongoing support, and any necessary adjustments to workflows. The quality of support the vendor provides, coupled with the strength of the user community, can significantly influence the overall experience, offering critical assistance during implementation and beyond. For many businesses, partnering with Managed Service Providers (MSPs) brings an added layer of expertise, ensuring that the chosen solution delivers maximum value while minimizing risk. The ongoing evolution and transformation of the virtualization market presents both challenges and opportunities. As the foundation for IT efficiency and flexibility, hypervisors remain central to these changes.

 

DORA’s Deadline Looms: Navigating the EU’s Mandate for Threat Led Penetration Testing

It’s hard to defend yourself if you have no idea what you’re up against, and history and countless news stories are evidence that trying to defend against every manner of digital threat is a fool’s errand. As such, the first step to approaching DORA compliance is profiling not only the threat actors that target the financial services sector, but specifically which actors, and by what Tactics, Techniques and Procedures (TTPs), you are likely to be attacked. However, before you can determine how an actor may view and approach you, you need to know who you are. So, the first profile that must be built is of your own business: not just financial services, but what sector/aspect, what region, and finally what the specific risk profile is, based on the critical assets in organizational, and even partner, infrastructures. The second profile begins with the current population of known actors that target the financial services industry. It then moves to narrowing down to the actors known to be aligned with the specific targeting profile. From there, leveraging industry-standard models such as the MITRE ATT&CK framework, a graph is created of each actor/group’s understood goals and TTPs, including their traditional and preferred methods of access and exploitation, as well as their capabilities for evasion, persistence, and command and control.
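
A first cut of the actor-to-TTP profiling described above can be expressed as a small data structure before any dedicated tooling is involved. The actor names and ATT&CK technique IDs below are placeholders invented for illustration, not threat intelligence; real profiles would come from CTI feeds and the MITRE ATT&CK knowledge base.

```python
# Placeholder threat data for illustration only.
ACTORS = {
    "ActorA": {"sectors": {"financial"}, "ttps": {"T1566", "T1078", "T1486"}},
    "ActorB": {"sectors": {"financial"}, "ttps": {"T1566", "T1059", "T1071"}},
    "ActorC": {"sectors": {"healthcare"}, "ttps": {"T1190", "T1486"}},
}


def relevant_actors(sector: str) -> dict:
    """Narrow the full actor population to those known to target a given sector."""
    return {name: a for name, a in ACTORS.items() if sector in a["sectors"]}


def prioritised_ttps(actors: dict) -> list[tuple[str, int]]:
    """Rank techniques by how many of the relevant actors are known to use them."""
    counts: dict[str, int] = {}
    for actor in actors.values():
        for ttp in actor["ttps"]:
            counts[ttp] = counts.get(ttp, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])


financial = relevant_actors("financial")
print(prioritised_ttps(financial))  # e.g. [('T1566', 2), ('T1078', 1), ...]
```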


With AGI looming, CIOs stay the course on AI partnerships

“The immediate path for CIOs is to leverage gen AI for augmentation rather than replacement — creating tools that help human teams make smarter, faster decisions,” Nardecchia says. “There are very promising results with causal AI and AI agents that give an autonomous-like capability and most solutions still have a human in the loop.” Matthew Gunkel, CIO of IT Solutions at the University of California at Riverside, agrees that IT organizations should keep moving forward regardless of the growing delta between AI technology milestones and actual AI implementations. ... “The rapid advancements in AI technology, including projections for AGI and ACI, present a paradox: While the technology races ahead, enterprise adoption remains in its infancy. This divergence creates both challenges and opportunities for CIOs, employees, and AI vendors,” Priest says. “Rather than speculating on when AGI/ACI will materialize, CIOs would be best served to focus on what preparation is required to be ready for it and to maximize the value from it.” Sid Nag, vice president at Gartner, agrees that CIOs should train their attention on laying the foundation for AI and addressing important matters such as privacy, ethics, legal issues, and copyright issues, rather than focus on AGI advances.



Quote for the day:

"When you practice leadership,The evidence of quality of your leadership, is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - January 13, 2025

Artificial intelligence is optimising the entire M&A lifecycle by providing data-driven insights at every stage to enable informed decisions. Companies considering a merger or acquisition can use AI to understand market trends, the performance of past deals, and other events of relevance to decide the way forward. On potential candidates, big data, analytics and AI algorithms help process vast amounts of corporate information from a variety of sources – financial statements, analyst briefings, media reports, and more – to identify acquisition targets meeting their requirements. AI augments the experts in due diligence, performing complex financial modelling or reviewing extensive legal documents and conducting risk analysis with higher accuracy in a fraction of the time compared to existing methods. ... When replacing a legacy enterprise system, at times with a cloud-based solution, organisations can become operational within six to fourteen months, depending on size, which is much faster than in a traditional on-premise scenario. ... Differences in the merging companies’ technology architectures, tools and configurations make it extremely challenging to ascertain M&A security posture accurately, completely, and on time, even if the organisations are already on the same cloud.


Time for a change: Elevating developers’ security skills

With detection and remediation tools trivializing code security in the same environments they trained with, it’s not unreasonable to think that junior engineers could maintain the ability to perform this basic task as well as maintain an understanding of the risks and consequences of the vulnerabilities they create as they draft code. For mid-level engineers, given the increased security proficiency earlier in their careers, it can now be expected that it’s their responsibility to necessitate code security with their engineers, before it is even reviewed by senior developers. ... For this effort, developers get a pretty substantial boost to their skill set with this deepened security knowledge, which can be very valuable given the current state of affairs for hiring cybersecurity professionals with a dearth of talent available, growing backlogs, and increasing cybersecurity risks in number and scope. Most importantly, they can achieve it without sacrificing productivity – detecting and remediating vulnerabilities can be done as easily as spellcheck finds spelling errors, and training can be short and tailored to what they’re working on, all within the integrated development environment (IDE) they work in every day. ... In addition, organizations can finally achieve the vision of true shift-left by integrating security into every level of the SDLC and adopt the culture of security they’ve rightly been clamoring for.


How Your Digital Footprint Fuels Cyberattacks — and What to Do About It

If you are like most of us, you have been using digital services for years not realizing that you have been giving hackers access to the details of your personal life. On social media, we voluntarily share PII about who we are and where we are, using the location check-in features. ... Reducing your digital footprint doesn’t have to mean going off the grid. Here are some practical steps you can take — Use separate emails for different accounts: Don’t rely on one email for everything. This minimizes the damage if one account is hacked — it won’t lead hackers to all your other services. Review privacy settings regularly: Many apps have default settings that overshare your information. For instance, on apps like Strava or Telegram, you can turn off location tracking and limit who can contact you or add you to conversations. A quick check of these settings can significantly reduce your exposure. Avoid saving passwords in web browsers: Browsers prioritize convenience, not security. Instead, use a password manager. These tools securely store your passwords and can generate strong, unique ones for each account. This reduces the risk of malware or phishing attacks stealing your credentials directly from your browser. Think before you post: Share less on social media, especially in real time. This will make you harder to track and target.


What is career catfishing, the Gen Z strategy to irk ghosting corporates?

After slogging through the exhausting process of job hunting — submitting countless applications, enduring endless rounds of interviews, and anxiously waiting for updates from unresponsive hiring managers — Gen Z workers have found a way to reclaim the balance of power. The rising trend, dubbed “career catfishing,” involves Gen Zs (those aged 27 and under) accepting job offers only to never show up on their first day. According to a survey by CV Genius, which polled 1,000 UK employees across generations, approximately 34 per cent of Zoomers admitted to engaging in career catfishing. ... Gen Z alone cannot shoulder the blame for the rise of such behaviours. Office ghosting — where one party cuts off communication without notice — is now a common phenomenon. ... Managers and owners identified entitlement, motivation, lack of effort, and productivity as reasons for terminating Gen Z employees. Some even referred to them as the snowflake generation and claimed they were too easily offended, which further justified their dismissal. The practice of career catfishing could further reinforce these stereotypes, making it even harder for young professionals to build trust with potential employers.


The next AI wave — agents — should come with warning labels

AI agents that use unclean data can introduce errors, inconsistencies, or missing values that make it difficult for the model to make accurate predictions or decisions. If the dataset has missing values for certain features, for instance, the model might incorrectly assume relationships or fail to generalize well to new data. An agent could also draw data from individuals without consent or use data that’s not anonymized properly, potentially exposing personally identifiable information. Large datasets with missing or poorly formatted data can also slow model training and cause it to consume more resources, making it difficult to scale the system. In addition, while AI agents must also comply with the European Union’s AI Act and similar regulations, innovation will quickly outpace those rules. Businesses must not only ensure compliance but also manage various risks, such as misrepresentation, policy overrides, misinterpretation, and unexpected behavior. “These risks will influence AI adoption, as companies must assess their risk tolerance and invest in proper monitoring and oversight,” according to a Forrester Research report — “The State Of AI Agents” — published in October. 


Euro-cloud Anexia moves 12,000 VMs off VMware to homebrew KVM platform

“We used to pay for VMware software one month in arrears,” he said. “With Broadcom we had to pay a year in advance with a two-year contract.” That arrangement, the CEO said, would have created extreme stress on company cashflow. “We would not be able to compete with the market,” he said. “We had customers on contracts, and they would not pay for a price increase.” Windbichler considered legal action, but felt the fight would have been slow and expensive. Anexia therefore resolved to migrate, a choice made easier by its ownership of another hosting business called Netcup that ran on a KVM-based platform. Another factor in the company’s favour was that it disguised the fact it ran VMware with an abstraction layer it called “Anexia Engine” that meant customers never saw Virtzilla’s wares and instead worked in a different interface to manage their VM fleets. ... The CEO thinks more companies will move from VMware. “I do not believe Broadcom will be successful,” he told The Register. “They lost all the trust. I have talked to so many VMware customers and they say they cannot work with a company like that.” Regulators are also interested in Broadcom’s practices, he said.


Preparing for AI regulation: The EU AI Act

Among the uses of AI that are banned under Article 5 are AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques. Article 5 also prohibits the use of AI systems that exploit any of the vulnerabilities of a person or a specific group of people due to their age, disability, or a specific social or economic situation. Systems that analyse social behaviours and then use this information in a detrimental way are also prohibited under Article 5 if their use goes beyond the original intent of the data collection. Other areas covered by Article 5 include the use of AI systems in law enforcement and biometrics. Industry observers describe the act as a “risk-based” approach to regulating artificial intelligence. ... Organisations operating in the EU will need to take into account CSRD. Given the power-hungry nature of machine learning and AI inference, the extent to which AI is used may well be influenced by such regulations going forward. While it builds on existing regulations, as Mélanie Gornet and Winston Maxwell note in the Hal Open Science paper The European approach to regulating AI through technical standards, the AI Act takes a different route from these. Their observation is that the EU AI Act draws inspiration from European product safety rules.


Enterprise Data Architecture: A Decade of Transformation and Innovation

Privacy and compliance drive architectural decisions. The One Identity Graph we developed manages complex customer relationships while ensuring CCPA and GDPR compliance. This graph-based solution has prevented data breaches and reduced regulatory risks by implementing automated data lineage tracking, consent management, and real-time data masking. These features reinforce customer trust through transparent data handling and granular access controls. The business impact proves substantial. The platform’s real-time fraud detection analyzes transaction patterns across multiple channels, preventing fraudulent activities before completion. It optimizes inventory dynamically across thousands of locations by simultaneously processing point-of-sale data, supply chain updates, and external market factors. Supply chain disruptions trigger immediate alerts through a sophisticated event correlation engine, enabling preventive action before customer impact. Edge computing represents the next frontier. Processing data closer to its source minimizes latency, critical for IoT applications and real-time decisions. Our implementation reduces data transfer costs by 40% while improving response times for customer-facing applications. 


AI is set to transform education — what enterprise leaders can learn from this development

While AI tools show immense promise in addressing resource constraints, their adoption raises broader questions about the role of human connection in learning. Which brings us back to Unbound Academy. Students will spend two hours online each school morning working through AI-driven lessons in math, reading, and science. Tools like Khanmigo and IXL will personalize the instruction and analyze progress, adjusting the difficulty and content in real-time to optimize learning outcomes. The Charter application asserts that “this ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.” Unbound Academy’s model significantly reduces the role of human teachers. Instead, human “guides” provide emotional support and motivation while also leading workshops on life skills. What will students lose by spending most of their learning time with AI instead of human instructors, and how might this model reshape the teaching profession? The Unbound Academy model is already used in several private schools and the results they have obtained are used to substantiate the advantages it claims. ... For any of this to happen, the industry needs action that matches the rhetoric.


6 ways continuous learning can advance your career

Joys said thinking critically is about learning how a new idea or innovation might be translated into the current organizational context. "At the end of the day, the company is writing a paycheck for you," he said. "Think about how new stuff provides business value." Joys said professionals also need to ensure the benefits of the things they introduce through their learning processes are tracked and traced. "That's about measuring those efforts to ensure you can say, 'Here's a new piece of technology. Here's how we'll measure how this technology lines up with our corporate strategy and vision.'" ... Worsley told ZDNET he likes to learn on the job rather than acquire new knowledge in the classroom. "I'm not a bookish person. I don't go out and read. I recognize that I need to learn specific things because I've got a problem to solve," he said. "I'll learn about it, get the right people talking, and get the solutions underway. Tell me something's impossible and I'll tell you it's not." ... Keith Woolley, chief digital and information officer at the University of Bristol, said the great thing about his job is that it's like a hobby. "I'm naturally interested in what I do. So, I read things around me without realizing I'm consuming other information," he said. "If you're excited about what you do, learning comes naturally because it's a genuine interest. Then learning happens when you don't expect it."



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer

Daily Tech Digest - November 19, 2024

AI-driven software testing gains more champions but worries persist

"There is a clear need to align quality engineering metrics with business outcomes and showcase the strategic value of quality initiatives to drive meaningful change," the survey's team of authors, led by Jeff Spevacek of OpenText, stated. "On the technology front, the adoption of newer, smarter test automation tools has driven the average level of test automation to 44%. However, the most transformative trend this year is the rapid adoption of AI, particularly Gen AI, which is set to make a huge impact." ... While AI offers great promise as a quality and testing tool, the study said there are "significant challenges in validating protocols, AI models, and the complexity of validation of all integrations. Currently, many organizations are struggling to implement comprehensive test strategies that ensure optimized coverage of critical areas. However, looking ahead, there is a strong expectation that AI will play a pivotal role in addressing these challenges and enhancing the effectiveness of testing activities in this domain." The key takeaway point from the research is that software quality engineering is rapidly evolving: "Once defined as testing human-written software, it has now evolved with AI-generated code."


How IAM Missteps Cause Data Breaches

Here’s where it gets complicated. Implementing least privilege requires an application’s requirements specifications to be available on demand, with details of the hierarchy and context behind every interconnected resource. Developers rarely know exactly which permissions each service needs. For example, to perform a read on an S3 bucket, we also need permission to list the contents of the bucket. ... This is where we begin to be reactive and apply tools that scan for misconfigurations. Tools like AWS IAM Access Analyzer or Google Cloud’s IAM recommender are valuable for identifying risky permissions or potential overreach. However, if these tools become the primary line of defense, they can create a false sense of security. Most permission-checking tools are designed to analyze permissions at a point in time, often flagging issues after permissions are already in place. This reactive approach means that misconfigurations are only addressed after they occur, leaving systems vulnerable until the next scan. ... The solution lies in rethinking the way in which we wire up these relationships in the first place. Let’s take a look at two very simple pieces of code that both expose an API with a route to return a pre-signed URL from a cloud storage bucket.
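
The excerpt ends before the two code samples it refers to, but the general shape of such a route is easy to show. Below is a minimal sketch, assuming AWS S3 with boto3 and Flask and a hypothetical bucket name; the article's point is that the IAM permissions sitting behind this one call (s3:GetObject on the object, and in many access patterns s3:ListBucket as well) are where least privilege gets hard.

```python
import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"  # hypothetical bucket name


@app.get("/download-url")
def download_url():
    """Return a time-limited pre-signed URL for an object in the bucket."""
    key = request.args.get("key", "")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # URL is valid for five minutes
    )
    return jsonify({"url": url})
```

Note that the route signs a URL for whatever key the caller supplies; which identities may call it, and what the signing role is allowed to read, is exactly the permissions question being raised.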


Explainable AI: A question of evolution?

Inexplicable black boxes lead back to the bewitchment of the Sorting Hat; with real life tools we need to know how their decisions are made. As for the human-in-the-loop on whom we are pinning so much, if they are to step in and override AI decisions the humans better be on more than just speaking terms with their tools. Explanation is their job description. And it’s where the tools are used by the state to make decisions about us, our lives, liberty and livelihoods, that the need for explanation is greatest. Take a policing example. Whether or not drivers understand them we’ve been rubbing along with speed cameras for decades. What will AI-enabled road safety tools look and sound and think like? If they’re on speaking terms with our in-car telematics they’ll know what we’ve been up to behind the wheel for the last year not just the last mile. Will they be on speaking terms with juries, courts and public inquiries, reconstructing events that took place before they were even invented, together with all the attendant sounds, smells and sensation rather than just pics and stats? Much depends on the type of AI involved but even Narrow AI has given the police new reach like remote biometrics. 


Rethinking Documentation for Agile Teams

Documentation doesn’t need to be a separate task or deliverable to complete. During every meeting or asynchronous interaction, you can organically create documentation by using a virtual whiteboard to take notes, create visuals, and complete activities. ... Look for tools that can help you build and maintain your technical documentation with less effort. Modern visual collaboration solutions like Lucid offer advanced features to streamline documentation. These solutions can automatically generate various diagrams such as flowcharts, ERDs, org charts, and UML diagrams directly from your data. Some even incorporate AI assistance to help build and optimize diagrams. By using automation, teams can significantly reduce errors commonly associated with the manual creation of documentation. Another advantage of these platforms is the ability to link your data sources directly to your documents. This integration ensures your documentation stays up to date automatically, without requiring additional effort. What's more, advanced visual collaboration solutions integrate with project management tools like Jira and Azure DevOps. This integration allows teams to seamlessly share visuals between their chosen platforms, saving time and effort in keeping information synchronized across their environment.


Succeeding with observability in the cloud

The complexity of modern cloud environments amplifies the need for robust observability. Cloud applications today are built upon microservices, RESTful APIs, and containers, often spanning multicloud and hybrid architectures. This interconnectivity and distribution introduce layers of complexity that traditional monitoring paradigms struggle to capture. Observability addresses this by utilizing advanced analytics, artificial intelligence, and machine learning to analyze real-time logs, traces, and metrics, effectively transforming operational data into actionable insights. One of observability’s core strengths is its capacity to provide a continuous understanding of system operations, enabling proactive management instead of waiting for failures to manifest. Observability empowers teams to identify potential issues before they escalate, shifting from a reactive troubleshooting stance to a proactive optimization mindset. This capability is crucial in environments where systems must scale instantly to accommodate fluctuating demands while maintaining uninterrupted service.


How to Reduce VDI Costs

The onset of widespread remote work made the strategy much more prevalent, given that many organizations already had VDI infrastructure and experience. Due to its architectural design, infrastructure requirements scale more or less linearly with usage. But that means most organizations are often upside-down in their VDI investment — given that the costs are significant — and it seems that both practitioners and users have disdain for the experience. ... Maintaining VDI can be costly due to the need for patch management, hardware upgrades and support for end-user issues. An enterprise browser eliminates maintenance costs associated with traditional VDI systems because it requires no additional hardware. It also lowers administrative costs by centralizing controls within the browser, which reduces the need for multiple security tools and streamlines policy management. ... VDI solutions and their back-end systems can have substantial licensing fees, including the VDI platform and any extra licenses for the operating systems and apps used in VDI sessions. An enterprise browser can reduce the need for VDI by 80% to 90%, saving money on licensing costs. ... Ensuring secure and compliant endpoint interactions within a VDI session often requires additional endpoint controls and management solutions. 


Quantum computing: The future just got faster

Quantum computing holds promise for breakthroughs in many different industries. For example, scientists could use this technology to improve drug research by remodeling complex molecules and interactions that were previously computationally prohibitive. Complex optimization problems, like those encountered in logistics and supply chain management, could see solutions that drastically reduce costs and improve efficiency. Quantum computers could revolutionize cryptography by rapidly solving mathematical problems that underpin current encryption methods, posing both opportunities and significant security challenges. Sure, logistics and molecular simulations might sound far off for us regular folks, but there are applications that are right around the corner. For example, quantum computing could allow marketers to quickly analyze and process vast amounts of consumer data to identify trends, optimize ad placements, and tailor campaigns in real-time. While traditional data analysis might take hours or days to sift through customer preferences, a quantum computer could potentially complete this analysis in minutes, providing marketers with insights to adjust strategies almost instantaneously.


Why AI alone can’t protect you from sophisticated email threats

The battle between AI-based social engineering and AI-powered security measures is an ongoing one. Sophisticated attackers may develop techniques to evade AI detection, such as using ever more subtle and contextually accurate language, but security tools will then adapt to this, putting the pressure back on the attackers. So while AI-based behavioural analysis is a powerful tool in the fight against sophisticated social engineering attacks, it is most effective when used within a multi-layered defence strategy that includes security awareness training and other security measures. ... Alternative strategies for CISOs to consider include integrating AI and machine learning into the email security platform. AI/ML can analyse vast amounts of data in real time to identify anomalies and malicious patterns and respond accordingly. Behavioural analytics help detect unusual activities and patterns that indicate potential threats. ... Ensuring the security of email communications, especially with the involvement of third-party vendors, requires a comprehensive approach that is based both on security due diligence of the partner and effective security tools. Before engaging with any third party, an organisation should conduct a background check and security assessment.


Shortsighted CEOs leave CIOs with increasing tech debt

There’s a delicate balance between short- and long-term IT goals. A lot of the current focus with AI projects is to cut costs and drive efficiencies, but organizations also need to think about longer-term innovation, says Taylor Brown, co-founder and COO of Fivetran, vendor of a data management platform. “Every business, at some scale, is based on the decision of, ‘Do I continue to invest to make my product better and update it, or do I just keep driving the revenue that I have out of the product that I have?’” he says. “A lot of companies face this, and if you want to stay relevant, you want to compete and invest in innovation.” There are some companies that can probably survive by not thinking about long-term innovation, but they are few and far between, Brown says. “If you’re a technology company, then absolutely, you have to constantly be thinking about innovation, unless you have some crazy lock-in,” he adds. “In order to win new customers, you have to keep innovating.” Some IT leaders, however, aren’t convinced about the IBM report’s focus on IT shortcuts vs. innovation. IT spending is driven more by a desire to enable business goals, such as growth, and managing risks, including cyberattacks, says Yvette Kanouff, partner at JC2 Ventures, a tech-focused venture capital firm.


Musk’s anticipated cost-cutting hacks could weaken American cybersecurity

Although it’s too soon to predict what cybersecurity regulations DOGE might affect, experts say Musk might, at minimum, seek to strip regulatory power from agencies that align with some of his business interests, weakening their cybersecurity requirements or recommended practices in the process. Musk’s effort dovetails with what experts have already said: there is a high likelihood that the Trump administration will move to eliminate cybersecurity regulations. A landmark Supreme Court decision this summer that casts doubt on the future of all expert agency regulations reinforces this deregulatory direction. ... Even if Musk and the DOGE effort were to succeed in hacking back a significant number of regulations, experts say it won’t come easy. “One doesn’t know how enduring their relationship will be, nor how much of it is just going to be talk, nor how much opposition there might be in the state generally,” Tony Yates, former Professor of Economics at Birmingham University in the UK and a former senior advisor to the Bank of England, tells CSO. “The US has lots of checks and balances, many of which aren’t working as well as they used to,” he says. “But they’re still not entirely absent. So, it’s really hard to predict.”



Quote for the day:

“Success is not so much what we have, as it is what we are.” -- Jim Rohn

Daily Tech Digest - July 18, 2024

The Critical Role of Data Cleaning

Data cleaning is a crucial step that eliminates irrelevant data, identifies outliers and duplicates, and fixes missing values. It involves removing errors, inconsistencies, and, sometimes, even biases from raw data to make it usable. While buying pre-cleaned data can save resources, understanding the importance of data cleaning is still essential. Inaccuracies can significantly impact results. In many cases, until low-value data is removed, the rest is hardly usable. Cleaning works as a filter, ensuring that the data passing through to the next step is more refined and relevant to your goals. ... At its core, data cleaning is the backbone of robust and reliable AI applications. It helps guard against inaccurate and biased data, ensuring AI models and their findings are on point. Data scientists depend on data cleaning techniques to transform raw data into a high-quality, trustworthy asset. ... Interestingly, LLMs that have been properly trained on clean data can play a significant role in the data cleaning process itself. Their advanced capabilities enable LLMs to automate and enhance various data cleaning tasks, making the process more efficient and effective.
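
As a concrete illustration of the filtering described above, here is a minimal pandas sketch over an invented toy dataset that drops duplicates, imputes missing values, and removes implausible outliers before the data moves downstream.

```python
import pandas as pd

# Toy dataset standing in for raw input; real pipelines read from files or databases.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "age": [34, 34, None, 29, 240],   # a missing value and an impossible outlier
    "spend": [120.0, 120.0, 85.5, 40.0, 61.2],
})

clean = raw.drop_duplicates()                              # remove exact duplicate rows
clean["age"] = clean["age"].fillna(clean["age"].median())  # impute missing ages
clean = clean[(clean["age"] > 0) & (clean["age"] < 120)]   # drop implausible outliers
clean = clean.reset_index(drop=True)

print(clean)
```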


What Is Paravirtualization?

Paravirtualization builds upon traditional virtualization by offering extra services, improved capabilities or better performance to guest operating systems. With traditional virtualization, the underlying resources are abstracted and presented to guests as virtual machines, so guest operating systems can run on them as is, says Greg Schulz, founder of the StorageIO Group, an IT industry analyst consultancy. However, those virtual machines hold on to all of the resources assigned to them, meaning there is a great deal of idle time, even though it doesn’t appear so, according to Kalvar. Paravirtualization uses software instructions to dynamically size and resize those resources, Kalvar says, turning VMs into bundles of resources. These bundles are managed by the hypervisor, the software component that runs multiple virtual machines on a single computer. ... One of the biggest advantages of paravirtualization is that it is typically more efficient than full virtualization because the hypervisor can closely manage and optimize resources across different guest operating systems. Users can manage the resources they consume on a granular basis. “I’m not buying an hour of a server, I’m buying seconds of resource time,” Kalvar says. 
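
As an illustration of that dynamic resizing, the sketch below uses the libvirt Python bindings to resize a running guest's memory allocation via the memory balloon, a mechanism that only works when the guest runs a cooperating paravirtualized driver. The guest name "vm1", the connection URI, and the target size are assumptions for the example, not a prescription:

```python
import libvirt  # libvirt-python bindings

GUEST_NAME = "vm1"                  # hypothetical guest; adjust for your environment
TARGET_MEM_KIB = 2 * 1024 * 1024    # ask for 2 GiB

# Connect to the local hypervisor (KVM/QEMU via libvirt in this sketch).
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(GUEST_NAME)

state, max_mem, cur_mem, vcpus, cpu_time = dom.info()
print(f"{GUEST_NAME}: current memory {cur_mem} KiB (max {max_mem} KiB)")

# Ask the guest to grow or shrink its allocation while it keeps running.
# This needs guest cooperation, i.e. a paravirtualized (virtio-balloon) driver.
dom.setMemoryFlags(TARGET_MEM_KIB, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```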


Leaked Access Keys: The Silent Revolution in Cloud Security

The challenge with service accounts is that MFA does not work for them, and network-level protection (IP filtering, VPN tunneling, etc.) is often not consistently applied, primarily due to complexity and cost. Thus, leaked service account keys frequently give attackers access to company resources. While phishing is unusual in the context of service accounts, leaks are often the result of developers (unintentionally) posting keys online, frequently alongside code fragments that reveal the account to which they belong. ... Now, Google has changed the game with its recent policy change. If an access key appears in a public GitHub repository, GCP deactivates the key, regardless of whether dependent applications crash. Google's announcement marks a shift in the risk-and-priority tango. Gone are the days when patching vulnerabilities could take days or weeks. Welcome to the fast-paced cloud era. Zero-second attacks after credential leaks demand zero-second fixes. Preventing an external attack becomes more important than avoiding crashing customer applications – that, at least, is Google's opinion. 
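
On the preventive side of this problem, here is a minimal sketch of a pre-commit style scanner that flags files containing credential-like strings before they can land in a public repository. The two regexes (AWS-style access key IDs and the private-key field of a GCP service account JSON file) are illustrative assumptions, not an exhaustive ruleset:

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners ship much larger rulesets.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GCP service account key": re.compile(r'"private_key"\s*:\s*"-----BEGIN PRIVATE KEY-----'),
}

def scan(paths: list[str]) -> int:
    """Print every suspected credential and return the number of findings."""
    findings = 0
    for name in paths:
        text = Path(name).read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{name}:{line_no}: possible {label}")
                findings += 1
    return findings

if __name__ == "__main__":
    # Example: python scan_secrets.py $(git diff --cached --name-only)
    sys.exit(1 if scan(sys.argv[1:]) else 0)
```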


Juniper advances AI networking software with congestion control, load balancing

On the load balancing front, Juniper has added support for dynamic load balancing (DLB) that selects the optimal network path and delivers lower latency, better network utilization, and faster job completion times. From the AI workload perspective, this results in better AI workload performance and higher utilization of expensive GPUs, according to Sanyal. “Compared to traditional static load balancing, DLB significantly enhances fabric bandwidth utilization. But one of DLB’s limitations is that it only tracks the quality of local links instead of understanding the whole path quality from ingress to egress node,” Sanyal wrote. “Let’s say we have CLOS topology and server 1 and server 2 are both trying to send data called flow-1 and flow-2, respectively. In the case of DLB, leaf-1 only knows the local links utilization and makes decisions based solely on the local switch quality table where local links may be in perfect state. But if you use GLB, you can understand the whole path quality where congestion issues are present within the spine-leaf level.”
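
A toy sketch of the difference described in the quote: with DLB the leaf chooses an uplink based only on the quality of its own local links, while GLB scores the whole path toward the egress leaf. The topology and quality numbers are invented for illustration and say nothing about Juniper's actual implementation:

```python
# Hypothetical CLOS fragment: leaf-1 -> {spine-1, spine-2} -> leaf-2.
# Higher score means better quality (less congestion).
local_link_quality = {"spine-1": 0.9, "spine-2": 0.8}   # leaf-1's view of its uplinks
downstream_quality = {"spine-1": 0.2, "spine-2": 0.9}   # spine -> leaf-2 segments

def pick_dlb() -> str:
    # DLB: decide purely on local uplink quality; leaf-1 cannot see the
    # congested spine-1 -> leaf-2 link, so it may still choose spine-1.
    return max(local_link_quality, key=local_link_quality.get)

def pick_glb() -> str:
    # GLB: combine local and downstream segments into a whole-path score.
    path_quality = {spine: local_link_quality[spine] * downstream_quality[spine]
                    for spine in local_link_quality}
    return max(path_quality, key=path_quality.get)

print("DLB next hop:", pick_dlb())  # spine-1, despite congestion further along
print("GLB next hop:", pick_glb())  # spine-2, because the whole path is healthier
```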


Impact of AI Platforms on Enhancing Cloud Services and Customer Experience

AI platforms enable businesses to streamline operations and reduce costs by automating routine tasks and optimizing resource allocation. Predictive analytics, powered by AI, allows for proactive maintenance and issue resolution, minimizing downtime and ensuring continuous service availability. This is particularly beneficial for industries where uninterrupted access to cloud services is critical, such as finance, healthcare, and e-commerce. ... AI platforms are not only enhancing backend operations but are also revolutionizing customer interactions. AI-driven customer service tools, such as chatbots and virtual assistants, provide instant support, personalized recommendations, and seamless user experiences. These tools can handle a wide range of customer queries, from basic information requests to complex problem-solving, thereby improving customer satisfaction and loyalty. The efficiency and round-the-clock availability of AI-driven tools make them invaluable for businesses. By the year 2025, it is expected that AI will facilitate around 95% of customer interactions, demonstrating its growing influence and effectiveness.


2 Essential Strategies for CDOs to Balance Visible and Invisible Data Work Under Pressure

Short-termism under pressure is a common mistake, resulting in an unbalanced strategy. How can we, as data leaders, successfully navigate such a scenario? “Working under pressure and with limited trust from senior management can force first-time CDOs to commit to an unbalanced strategy, focusing on short-term, highly visible projects – and ignore the essential foundation.” ... The desire to invest in enabling topics stems from the balance between driving and constraining forces. Senior management tends to ignore enabling topics because they rarely contribute directly to the bottom line; they can be a black box to a non-technical person and require multiple teams to collaborate effectively. On the other hand, Anne knew that the same people eagerly anticipated the impact of advanced analytics such as GenAI and were worried about potential regulatory risks. With the knowledge of the key enabling work packages and the motivating forces at play, Anne had everything she needed to argue for and execute a balanced long-term data strategy that does not ignore the “invisible” work required.


Gen AI Spending Slows as Businesses Exercise Caution

Generative AI has advanced rapidly over the past year, and organizations are recognizing its potential across business functions. But businesses have now taken a cautious stance on gen AI adoption due to steep implementation costs and concerns about hallucinations. ... This trend reflects a broader shift away from the AI hype, and while businesses acknowledge the potential of this technology, they are also wary of the associated risks and costs, according to Michael Sinoway, CEO, Lucidworks. "The flattened spending suggests a move toward more thoughtful planning. This approach ensures AI adoption delivers real value, balancing competitiveness with cost management and risk mitigation," he said. ... Concerns regarding implementation costs, accuracy and data security have increased considerably in 2024. The number of business leaders concerned about implementation costs has grown 14-fold, and the number concerned about response accuracy has grown fivefold. While concerns about data security have increased only threefold, data security remains the biggest worry.


CIOs are stretched more than ever before — and that’s a good thing

“Many CIOs have built years of credibility and trust by blocking and tackling the traditional responsibilities of the role,” she adds. “They’re now being brought to the conversation as business leaders to help the organization think through transformational priorities because they’re functional experts like any other executive in the C-suite.” ... “Boards want technology to improve the top and bottom line, which can be a tough balance, even if it’s one that CIOs are getting used to managing,” says Nash Squared’s White. “On the one hand, they’re being asked to promote innovation and help generate revenue, and on the other, they’re often charged with governance and security, too.” The importance of technology will only continue to increase going forward as well. Gen AI, for example, will make it possible to boost productivity while reducing costs. CyberArk’s Grossman expects the central role of digital leaders in exploiting these emerging technologies will mean high-level CIOs will be even more important in the future.


What Is a Sovereign Cloud and Who Truly Benefits From It?

A sovereign cloud is a cloud computing environment designed to help organizations comply with regulatory rules established by a particular government. This often entails ensuring that data stored within the cloud environment remains within a specific country. But it can also involve other practices, as we explain below. ... For one thing, cost. In general, cloud computing services on a sovereign cloud cost more than their equivalents on a generic public cloud. The exact pricing can vary widely depending on a number of factors, such as which cloud regions you select and which types of services you use, but in general, expect to pay a premium of at least 15% to use a sovereign cloud. A second challenge of using sovereign clouds is that in some cases your organization must undergo a vetting process to use them because some sovereign cloud providers only make their solutions available to certain types of organizations — often, government agencies or contractors that do business with them. This means you can't just create a sovereign cloud account and start launching workloads in a matter of minutes, as you could in a generic public cloud.


Securing datacenters may soon need sniffer dogs

So says Len Noe, tech evangelist at identity management vendor CyberArk. Noe told The Register he has ten implants – passive devices that are observable with a full-body X-ray but invisible to most security scanners. Noe explained he's acquired swipe cards used to access controlled premises, cloned them onto his implants, and successfully entered buildings by just waving his hands over card readers. ... Noe thinks hounds are therefore currently the only reliable means of finding humans with implants that could be used to clone ID cards. He thinks dogs should be considered because attackers who access datacenters using implants would probably walk away scot-free. Noe told The Register that datacenter staff would probably notice an implant-packing attacker before they access sensitive areas, but would then struggle to find grounds for prosecution because implants aren't easily detectable – and even if they were, the information they contain is considered medical data and is therefore subject to privacy laws in many jurisdictions.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree