
Daily Tech Digest - March 17, 2026


Quote for the day:

"Make heroes out of the employees who personify what you want to see in the organization." -- Anita Roddick




How organizations can make a successful transition to Post-Quantum Cryptography (PQC)

In the article "How Organizations Can Make a Successful Transition to Post-Quantum Cryptography (PQC)," the author outlines a strategic framework for businesses to defend against the impending "Harvest Now, Decrypt Later" (HNDL) threat. This tactic involves malicious actors exfiltrating sensitive data today to decrypt it once powerful quantum computers become viable. To counter this, organizations must first establish a top-down strategy that prioritizes a hybrid cryptographic approach. By combining classical, proven algorithms like ECDH with new NIST-standardized PQC algorithms such as ML-KEM, companies create a safety net against unforeseen vulnerabilities in emerging standards. A critical foundational step is the creation of a comprehensive "Crypto-Bill of Materials" (CBOM) to inventory all cryptographic assets and prioritize "crown jewels" like financial transactions and intellectual property. Furthermore, enterprises should codify these requirements into their procurement policies to prevent the accumulation of further cryptographic debt during new software acquisitions. Finally, the article stresses the importance of assigning clear, cross-functional ownership to ensure accountability across IT, legal, and supply chain departments. By treating the PQC transition as a long-term strategic initiative rather than a simple technical patch, CIOs can ensure their organizations remain resilient and protect the long-term integrity of their most vital data.


Who’s in the data-center space race?

In the article "Who’s in the data-center space race?" on Network World, Maria Korolov explores the ambitious frontier of orbital computing and the major players vying for celestial dominance. Tech giants like SpaceX and Google lead the charge, with Elon Musk’s SpaceX proposing a massive constellation of one million satellites for xAI workloads, while Google’s Project Suncatcher aims to deploy solar-powered tensor processing units in orbit. These initiatives seek to capitalize on abundant solar energy and the natural cooling of space, bypassing terrestrial power constraints and environmental hurdles. Startups like Lonestar are even targeting lunar data storage, while European and Chinese consortiums plan to establish extensive AI training networks by 2030. Despite the promise of high-speed optical downlinks and lower latency, significant obstacles remain, including the extreme costs of orbital launches and the necessity of radiation-hardening sensitive silicon chips. Experts predict that economic feasibility hinges on reducing launch prices to under $200 per kilogram, a milestone expected by the mid-2030s. Ultimately, this space race represents a transformative shift in infrastructure, moving beyond terrestrial limitations to build a decentralized, planet-scale intelligence backbone that could redefine global connectivity and artificial intelligence processing.


When Code Becomes Cheap, Engineering Becomes Governance

In the article "When Code Becomes Cheap, Engineering Becomes Governance" on DevOps.com, Alan Shimel discusses how generative AI is fundamentally recalibrating the software development lifecycle by making the production of code almost instantaneous and effectively "cheap." As AI agents handle the manual labor of writing syntax, the traditional bottleneck of code authorship is vanishing, creating a significant paradox: while output volume explodes, risks associated with security, technical debt, and architectural coherence multiply. Consequently, the core discipline of software engineering is transitioning from a focus on creation to a focus on governance. Engineering teams must now prioritize the curation, verification, and oversight of automated output to prevent unmanageable complexity. This new paradigm demands that developers act as strategic supervisors or "building inspectors," implementing rigorous policy enforcement and guardrails to ensure system integrity. Shimel argues that in an era of abundant code, human expertise is most valuable for high-level decision-making and risk management. Ultimately, success depends on an organization's ability to evolve its culture, treating governance as the essential backbone of sustainable, secure software delivery. This evolution ensures that while machines generate syntax, humans remain responsible for the stability and comprehensibility of the overall system.

On March 6, 2026, the Trump Administration unveiled its "Cyber Strategy for America," an aggressive framework emphasizing offensive deterrence, deregulation, and the rapid adoption of AI-powered security measures. While the seven-page document outlines six core pillars—including shaping adversary behavior and hardening critical infrastructure—experts at Biometric Update highlight a significant "identity gap" within the overarching plan. Although the strategy explicitly prioritizes emerging technologies like blockchain, post-quantum cryptography, and autonomous agentic AI, it notably fails to establish a centralized national digital identity strategy or a unified identity assurance framework. This omission is particularly striking as identity fraud and synthetic personas increasingly fuel transnational cybercrime, financial scams, and voter suppression fears. Critics argue that treating digital identity as an afterthought rather than a front-line defense leaves both government and the private sector navigating a fragmented regulatory environment. Interestingly, this lack of focus contrasts with concurrent reports from the Treasury Department, which position digital identity as a critical security layer for modern digital assets. Ultimately, while the strategy successfully shifts the national posture toward risk imposition and technological dominance, it remains an incomplete doctrine by leaving the foundational challenge of identity verification unresolved in an era of sophisticated AI-generated deception.


Practical DevOps Leadership Without the Drama

In the article "Practical DevOps Leadership Without the Drama" on the DevOps Oasis blog, the author argues that effective leadership in a technical environment is less about "mystical" management and more about grounded problem-solving and unblocking teams. The piece outlines several pragmatic pillars to maintain a high-performing, low-stress culture. First, it emphasizes starting every initiative by clearly defining the problem to avoid "hobby projects" and align with DORA metrics. Second, it champions visibility through flow, risk, and ownership tracking, suggesting that "red is a color, not a career-limiting event" to surface issues early. Third, leadership involves setting standards that remove repetitive decisions rather than autonomy, using tools like Kubernetes baselines to make the "safe path the easy path." The article also stresses that incident leadership requires a calm, structured routine where coordination is prioritized over individual heroics. Finally, it highlights the importance of a systematic approach to feedback, intentional hiring for systems thinking, and the courage to use guardrails—such as policy-as-code—to prevent predictable operational pain. Ultimately, the post serves as a playbook for building resilient teams that ship quality code without sacrificing sleep or psychological safety.


Rocketlane CEO: AI requires a structural reset of professional SaaS

In the Techzine article, Rocketlane CEO Srikrishnan Ganesan argues that the rise of artificial intelligence necessitates a fundamental "structural reset" of the professional SaaS industry. He contends that simply layering AI features onto existing platforms is a superficial approach that fails to capture the technology's true potential. Instead, the next generation of SaaS must transition from being mere "systems of record" to "systems of action" where AI agents actively execute tasks—such as automated documentation, data transformation, and project management—rather than just tracking them. This shift is particularly impactful for professional services and customer onboarding, where traditional hourly billing models are becoming obsolete in favor of value-based outcomes and fixed fees. Ganesan emphasizes that by delegating routine configurations to AI, human teams can evolve into "orchestrators" focused on high-level strategy and ROI. This transformation enables vendors to offer more scalable, "white-glove" experiences while significantly reducing delivery costs. Ultimately, the article suggests that organizations re-architecting their service models around autonomous capabilities will define the next operating model, while those clinging to legacy, labor-intensive frameworks risk being outpaced by AI-native competitors that redefine the speed of service delivery.


Cryptojackers Lurk in Open Source Clouds

The article "Cryptojackers Lurk in Open Source Clouds" from CACM News explores the growing threat of host-based cryptojacking, where attackers infiltrate Linux cloud environments to surreptitiously mine cryptocurrency. Unlike traditional PC-based malware, cloud-level cryptojacking is highly lucrative because a single entry point can grant access to millions of processors. Attackers typically evade detection by "throttling" their resource usage to blend into background kernel noise and utilizing techniques like program-identification randomization to bypass standard monitoring. This structural complexity often obscures accountability, enabling malicious code to persist even through manual scans. To combat these sophisticated vulnerabilities, researchers introduced CryptoGuard, an open-source framework that leverages deep learning to integrate detection and automated remediation. By tracking specific time-series patterns in kernel-space system calls rather than relying on easily obfuscated process IDs, CryptoGuard can pinpoint scheduler tampering and execute periodic automated erasures to thwart reinfection. This represents a vital shift toward proactive defense, moving beyond simple alerting to real-time, scale-ready intervention. Ultimately, the article argues that restoring visibility in dynamic cloud infrastructures requires such automated, high-fidelity solutions to empower security teams against innovatively hidden cyber threats that continue to exploit vast, under-monitored computational resources.

The article "A million hard drives go offline daily: the massive data waste problem" on Data Center Dynamics highlights a critical yet often overlooked sustainability crisis within the global technology industry. Each year, tens of millions of hard disk drives reach the end of their functional lifespan, yet a staggering number are shredded rather than repurposed. This practice, often driven by rigid security compliance standards like NIST 800-88, leads to an environmental "tsunami" of e-waste, with an estimated one million drives being destroyed every single day. The destruction of these devices not only creates massive amounts of physical waste but also results in the permanent loss of precious, non-renewable raw materials such as neodymium, gold, and copper, valued at hundreds of millions of dollars annually. To combat this, the piece advocates for a shift toward a circular economy model, emphasizing secure data sanitization—software-based wiping—over physical destruction. By adopting "delete, don't destroy" policies and utilizing robotic disassembly for component recovery, the industry could significantly reduce its carbon footprint. Ultimately, the article calls for a collaborative effort between tech giants, regulators, and data center operators to prioritize resource recovery and sustainable innovation to protect the planet’s future.
In the article "Green IT Meets Database Engineering," Craig S. Mullins explores the critical intersection of database administration and environmental sustainability, arguing that efficient data architecture is essential for reducing an organization's energy footprint. As data centers consume a significant portion of global electricity, DBAs must transition toward "carbon-aware" engineering by addressing "data sprawl"—the accumulation of unused tables and redundant records that inflate storage and cooling demands. The author emphasizes that fundamental practices like proper schema normalization, appropriate data typing, and rigorous index discipline are not just performance boosters but key drivers for energy conservation. Efficient SQL coding further reduces CPU cycles and I/O operations, directly cutting power usage. Furthermore, the shift toward cloud-native environments requires precise "right-sizing" to prevent energy waste from overprovisioned resources. By integrating these green principles into the architectural lifecycle, database engineers can align cost-effectiveness with corporate social responsibility. Ultimately, the piece posits that sustainable data management is rooted in disciplined engineering, where every optimized query and trimmed dataset contributes to a more ecologically responsible digital ecosystem without sacrificing growth or technical excellence.


What Africa’s shared data centres can teach the rest of EMEA

In the article "What Africa’s shared data centres can teach the rest of EMEA" on Data Centre Review, Ryan Holmes explores how African nations are leapfrogging traditional IT evolution by bypassing legacy infrastructure in favor of local, shared colocation platforms. As demand for AI-driven workloads and real-time processing surges, organizations across the continent are prioritizing proximity to minimize latency and ensure data sovereignty. This shift mirrors earlier technological breakthroughs like mobile money, allowing emerging markets to avoid the high costs and risks associated with self-managed enterprise servers or offshore hyperscale dependency. The author highlights that shared data centers offer a pragmatic solution for governments and businesses to meet strict residency regulations while maintaining high operational resilience. Furthermore, the absence of major hyperscalers in many African regions has fostered a robust ecosystem of professionally managed, carrier-neutral facilities that provide a cost-effective, opex-based alternative to capital-intensive builds. Ultimately, Africa’s move toward localized, resilient, and collaborative infrastructure provides a vital blueprint for the rest of EMEA, demonstrating that digital independence and performance are best achieved through partnership and strategic proximity rather than isolated ownership or total reliance on global giants.

Daily Tech Digest - November 21, 2024

Building Resilient Cloud Architectures for Post-Disaster IT Recovery

A resilient cloud architecture is designed to maintain functionality and service quality during disruptive events. These architectures ensure that critical business applications remain accessible, data remains secure, and recovery times are minimized, allowing organizations to maintain operations even under adverse conditions. To achieve resilience, cloud architectures must be built with redundancy, reliability, and scalability in mind. This involves a combination of technologies, strategies, and architectural patterns that, when applied collect ... Cloud-based DRaaS solutions allow organizations to recover critical workloads quickly by replicating environments in a secondary cloud region. This ensures that essential services can be restored promptly in the event of a disruption. Automated backups, on the other hand, ensure that all extracted data is continually saved and stored in a secure environment. Using regular snapshots can also provide rapid restoration points, giving teams the ability to revert systems to a pre-disaster state efficiently. ... Infrastructure as code (IaC) allows for the automated setup and configuration of cloud resources, providing a faster recovery process after an incident. 
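
As one small, hedged illustration of the automated-backup idea (assuming an AWS environment, boto3, and a hypothetical `Backup=true` volume tag), the snippet below creates snapshots for every tagged volume so that recent restoration points always exist.

```python
# Sketch: create snapshots of every EBS volume tagged Backup=true.
# Assumes AWS credentials and region are already configured for boto3.
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
)["Volumes"]

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%MZ")
for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"automated-backup {vol['VolumeId']} {stamp}",
    )
    print(f"snapshot {snap['SnapshotId']} started for {vol['VolumeId']}")
```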


Agile Security Sprints: Baking Security into the SDLC

Making agile security sprints effective requires organizations to embrace security as a continuous, collaborative effort. The first step? Integrating security tasks into the product backlog right alongside functional requirements. This approach ensures that security considerations are tackled within the same sprint, allowing teams to address potential vulnerabilities as they arise — not after the fact when they're harder and more expensive to fix. ... By addressing security iteratively, teams can continuously improve their security posture, reducing the risk of vulnerabilities becoming unmanageable. Catching security issues early in the development lifecycle minimizes delays, enabling faster, more secure releases, which is critical in a competitive development landscape. The emphasis on collaboration between development and security teams breaks down silos, fostering a culture of shared responsibility and enhancing the overall security-consciousness of the organization. Quickly addressing security issues is often far more cost-effective than dealing with them post-deployment, making agile security sprints a necessary choice for organizations looking to balance speed with security.


The new paradigm: Architecting the data stack for AI agents

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. This essentially means that the platform being used should ingest and process data in real-time from all major sources, have systems in place for ensuring the quality/richness of the data and then have robust access, governance and security policies in place to ensure responsible agent use. “Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat. ... “No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized.


Enhancing visibility for better security in multi-cloud and hybrid environments

The number one challenge for infrastructure and cloud security teams is visibility into their overall risk, especially in complex environments like cloud, hybrid cloud, containers, and Kubernetes. Kubernetes is now the tool of choice for orchestrating and running microservices in containers, but it has also been one of the last areas to catch speed from a security perspective, leaving many security teams feeling caught on their heels. This is true even if they have deployed admission control or have other container security measures in place. Teams need a security tool in place that can show them who is accessing their workloads and what is happening in them at any given moment, as these environments have an ephemeral nature to them. A lot of legacy tooling just has not kept up with this demand. The best visibility is achieved with tooling that allows for real-time visibility and real-time detection, not point-in-time snapshotting, which does not keep up with the ever-changing nature of modern cloud environments. To achieve better visibility in the cloud, automate security monitoring and alerting to reduce manual effort and ensure comprehensive coverage. Centralize security data using dashboards or log aggregation tools to consolidate insights from across your cloud platforms.


How Augmented Reality is Shaping EV Development and Design

Traditionally, prototyping has been a costly and time-consuming stage in vehicle development, often requiring multiple physical models and extensive trial and error. AR is disrupting this process by enabling engineers to create and test virtual prototypes before building physical ones. Through immersive visualizations, teams can virtually assess design aspects like fit, function, and aesthetics, streamlining modifications and significantly shortening development cycles. ... One of the key shifts in EV manufacturing is the emphasis on consumer-centric design. EV buyers today expect not just efficiency but also vehicles that reflect their lifestyle choices, from customizable interiors to cutting-edge tech features. AR offers manufacturers a way to directly engage consumers in the design process, offering a virtual showroom experience that enhances the customization journey. ... AR-assisted training is one frontier seeing a lot of adoption. By removing humans from dangerous scenarios while still allowing them to interact with those same scenarios, companies can increase safety while still offering practical training. In one example from Volvo, augmented reality is allowing first responders to assess damage on EVs and proceed with caution.


Digital twins: The key to unlocking end-to-end supply chain growth

Digital twins can be used to model the interaction between physical and digital processes all along the supply chain—from product ideation and manufacturing to warehousing and distribution, from in-store or online purchases to shipping and returns. Thus, digital twins paint a clear picture of an optimal end-to-end supply chain process. What’s more, paired with today’s advances in predictive AI, digital twins can become both predictive and prescriptive. They can predict future scenarios to suggest areas for improvement or growth, ultimately leading to a self-monitoring and self-healing supply chain. In other words, digital twins empower the switch from heuristic-based supply chain management to dynamic and granular optimization, providing a 360-degree view of value and performance leakage. To understand how a self-healing supply chain might work in practice, let’s look at one example: using digital twins, a retailer sets dynamic SKU-level safety stock targets for each fulfillment center that dynamically evolve with localized and seasonal demand patterns. Moreover, this granular optimization is applied not just to inventory management but also to every part of the end-to-end supply chain—from procurement and product design to manufacturing and demand forecasting. 
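
To make the SKU-level safety stock idea concrete, here is a minimal sketch using the textbook formula (safety stock = z times demand volatility times the square root of lead time); the service level, demand history, and lead times are invented placeholders, whereas a real digital twin would recompute them continuously from live demand signals.

```python
# Sketch: recompute per-SKU, per-fulfillment-center safety stock from recent
# local demand. Data and parameters are illustrative placeholders.
import statistics
from math import sqrt

Z_95 = 1.65  # z-score for roughly a 95% service level

def safety_stock(daily_demand: list[float], lead_time_days: float) -> float:
    sigma = statistics.pstdev(daily_demand)      # local demand volatility
    return Z_95 * sigma * sqrt(lead_time_days)

# Hypothetical recent demand per (fulfillment center, SKU).
demand = {
    ("fc-east", "SKU-123"): [40, 55, 38, 61, 47, 52, 44],
    ("fc-west", "SKU-123"): [12, 9, 15, 11, 14, 10, 13],
}
lead_times = {"fc-east": 4.0, "fc-west": 7.0}  # replenishment lead time, days

for (fc, sku), series in demand.items():
    target = safety_stock(series, lead_times[fc])
    print(f"{fc}/{sku}: safety stock target ~ {target:.0f} units")
```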


Illegal Crypto Mining: How Businesses Can Prevent Themselves From Being ‘Cryptojacked’

Business leaders might believe that illegal crypto mining programs pose no risks to their operations. Considering the number of resources most businesses dedicate to cybersecurity, it might seem like a low priority in comparison to other risks. However, the successful deployment of malicious crypto mining software can lead to even more risks for businesses, putting their cybersecurity posture in jeopardy. Malware and other forms of malicious software can drain computing resources, cutting the life expectancy of computer hardware. This can decrease the long-term performance and productivity of all infected computers and devices. Additionally, the large amount of energy required to support the high computing power of crypto mining can drain electricity across the organization. But one of the most severe risks associated with malicious crypto mining software is that it can include other code that exploits existing vulnerabilities. ... While powerful cybersecurity tools are certainly important, there’s no single solution to combat illegal crypto mining. But there are different strategies that business leaders can implement to reduce the likelihood of a breach, and mitigating human error is among the most important. 


10 Most Impactful PAM Use Cases for Enhancing Organizational Security

Security extends beyond internal employees as collaborations with third parties also introduce vulnerabilities. PAM solutions allow you to provide vendors with time-limited, task-specific access to your systems and monitor their activity in real time. With PAM, you can also promptly revoke third-party access when a project is completed, ensuring no dormant accounts remain unattended. Suppose you engage third-party administrators to manage your database. In this case, PAM enables you to restrict their access based on a "need-to-know" basis, track their activities within your systems, and automatically remove their access once they complete the job. ... Reused or weak passwords are easy targets for attackers. Relying on manual password management adds another layer of risk, as it is both tedious and prone to human error. That's where PAM solutions with password management capabilities can make a difference. Such solutions can help you secure passwords throughout their entire lifecycle — from creation and storage to automatic rotation. By handling credentials with such PAM solutions and setting permissions according to user roles, you can make sure all the passwords are accessible only to authorized users. 
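
As a toy sketch of the time-limited, task-specific access pattern (not the API of any particular PAM product), the snippet below issues a grant with an expiry and treats it as revoked the moment it is checked after that window closes.

```python
# Toy model of time-boxed third-party access: grants carry an expiry and are
# marked revoked the first time they are checked after it. Illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AccessGrant:
    vendor: str
    scope: str                      # e.g. "db:orders:read"
    expires_at: datetime
    revoked: bool = field(default=False)

    def is_active(self) -> bool:
        if not self.revoked and datetime.now(timezone.utc) >= self.expires_at:
            self.revoked = True     # automatic revocation on expiry
        return not self.revoked

def grant_access(vendor: str, scope: str, hours: int) -> AccessGrant:
    return AccessGrant(
        vendor=vendor,
        scope=scope,
        expires_at=datetime.now(timezone.utc) + timedelta(hours=hours),
    )

grant = grant_access("acme-dba", "db:orders:read", hours=8)
print("active now:", grant.is_active())
grant.expires_at = datetime.now(timezone.utc) - timedelta(minutes=1)
print("active after expiry:", grant.is_active())   # False, auto-revoked
```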


The Information Value Chain as a Framework for Tackling Disinformation

The information value chain has three stages: production, distribution, and consumption. Claire Wardle proposed an early version of this framework in 2017. Since then, scholars have suggested tackling disinformation through an economics lens. Using this approach, we can understand production as supply, consumption as demand, and distribution as a marketplace. In so doing, we can single out key stakeholders at each stage and determine how best to engage them to combat disinformation. By seeing disinformation as a commodity, we can better identify and address the underlying motivations ... When it comes to the disinformation marketplace, disinformation experts mostly agree it is appropriate to point the finger at Big Tech. Profit-driven social media platforms have understood for years that our attention is the ultimate gold mine and that inflammatory content is what attracts the most attention. There is, therefore, a direct correlation between how much disinformation circulates on a platform and how much money it makes from advertising. ... To tackle disinformation, we must think like economists, not just like fact-checkers, technologists, or investigators. We must understand the disinformation value chain and identify the actors and their incentives, obstacles, and motivations at each stage.


Why do developers love clean code but hate writing documentation?

In fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation can be challenging. Developers often deprioritize documentation due to tight deadlines and a focus on delivering working code. This leads to informal, hard-to-understand documentation that quickly becomes outdated as the software evolves. Another significant issue is that documentation is frequently viewed as unnecessary overhead. Developers may believe that code should be self-explanatory or that documentation slows down the development process. ... To prevent documentation from becoming a second-class citizen in the software development lifecycle, Ferri-Beneditti argues that documentation needs to be observable, something that can be measured against the KPIs and goals developers and their managers often use when delivering projects. ... By offloading the burden of documentation creation onto AI, developers are free to stay in their flow state, focusing on the tasks they enjoy—building and problem-solving—while still ensuring that the documentation remains comprehensive and up-to-date. Perhaps most importantly, this synergy between GenAI and human developers does not remove human oversight. 



Quote for the day:

"The harder you work for something, the greater you'll feel when you achieve it." -- Unknown

Daily Tech Digest - February 01, 2024

Making the Leap From Data Governance to AI Governance

One of the AI governance challenges Regensburger is researching revolves around ensuring the veracity of outcomes, of the content that’s generated by GenAI. “It’s sort of the unknown question right now,” he says. “There’s a liability question on how you use…AI as a decision support tool. We’re seeing it in some regulations like the AI Act and President Biden’s proposed AI Bill of Rights, where outcomes become really important, and it moves that into the governance sphere.” LLMs have the tendency to make things up out of whole cloth, which poses a risk to anyone who uses it. For instance, Regensburger recently asked an LLM to generate an abstract on a topic he researched in graduate school. “My background is in high energy physics,” he says. “The text it generated seemed perfectly reasonable, and it generated a series of citations. So I just decided to look at the citations. It’s been a while since I’ve been in graduate school. Maybe something had come up since then?” “And the citations were completely fictitious,” he continues. “Completely. They look perfectly reasonable. They had Physics Review Letters. It had all the right formats. And at your first casual inspection it looked reasonable. 


Architecting for Industrial IoT Workloads: A Blueprint

The first step in an IIoT-enabled environment is to establish communication interfaces with the machinery. In this step, there are two primary goals: read data from machines (telemetry) and write data to machines (control and automation). Machines in a manufacturing plant can have legacy/proprietary communication interfaces and modern IoT sensors. Most industrial machines today are operated by programmable logic controllers (PLC). A PLC is an industrial computer ruggedized and adapted to control manufacturing processes—such as assembly lines, machines, and robotic devices — or any activity requiring high reliability, ease of programming and process fault diagnosis. However, PLCs provide limited connectivity interfaces with the external world over protocols like HTTP and MQTT, restricting external data reads (for telemetry) and writes (for control and automation). Apache PLC4X bridges this gap by providing a set of API abstractions over legacy and proprietary PLC protocols. PLC4X is an open-source universal protocol adapter for IIoT appliances that enables communication over protocols including, but not limited to, Siemens S7, Modbus, Allen Bradley, Beckhoff ADS, OPC-UA, Emerson, Profinet, BACnet and Ethernet.
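
PLC4X itself is a JVM library, so as a hedged Python stand-in for the same "read telemetry from a PLC" step, the sketch below polls holding registers over Modbus TCP with pymodbus; the address, register map, and scaling are hypothetical.

```python
# Stand-in sketch for PLC telemetry reads (PLC4X is Java; this uses pymodbus).
# The PLC address, register layout, and scaling are hypothetical.
from pymodbus.client import ModbusTcpClient  # pip install pymodbus

client = ModbusTcpClient("192.0.2.10", port=502)  # example PLC address
if client.connect():
    # Read two holding registers assumed to hold temperature and spindle RPM.
    result = client.read_holding_registers(address=0, count=2)
    if not result.isError():
        temperature_raw, spindle_rpm = result.registers
        print(f"temperature={temperature_raw / 10:.1f}C rpm={spindle_rpm}")
    client.close()
else:
    print("could not reach PLC")
```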


6 user experience mistakes made for security and how to fix them

The challenge here is to communicate effectively with your non-experts in a way that they understand the “what” and “why” of cybersecurity. “The goal is to make it practical rather than condescending, manipulative, or punitive,” Sunshine says. “You need to take down that fear factor.” So long as people have the assurance that they can come clean and not be fired for that kind of mistake, they can help strengthen security by coming forward about problems instead of trying to cover them up. ... To achieve optimal results, you have to strike the right balance between the level of security required and the convenience of users. Much depends on the context. The bar is much higher for those who work with government entities, for example, than a food truck business, Sunshine says. Putting all the safeguards required for the most regulated industries into effect for businesses that don’t require that level of security introduces unnecessary friction. Failing to differentiate among different users and needs is the fundamental flaw of many security protocols that require everyone to use every security measure for everything.


5 New Ways Cyberthreats Target Your Bank Account

Deepfake technology, initially designed for entertainment, has evolved into a potent tool for cybercriminals. Through artificial intelligence and machine learning, these technologies fuel intricate social engineering attacks, enabling attackers to mimic trusted individuals with astonishing precision. This proficiency grants them access to critical data like banking credentials, resulting in significant financial repercussions. ... Modern phishing tactics now harness artificial intelligence to meticulously analyse extensive data pools, encompassing social media activities and corporate communications. This in-depth analysis enables the creation of highly personalised and contextually relevant messages, mimicking trusted sources like banks or financial institutions. This heightened level of customisation significantly enhances the credibility of these communications, amplifying the risk of recipients disclosing sensitive information, engaging with malicious links, or unwittingly authorising fraudulent transactions. ... Credential stuffing is a prevalent and dangerous method cybercriminals use to breach bank accounts. This attack method exploits the widespread practice of password reuse across multiple sites and services.
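
One concrete defense against credential stuffing is screening passwords against known breach corpuses. The sketch below uses the public Pwned Passwords range API with k-anonymity, so only the first five characters of the SHA-1 hash ever leave the client; treat it as an illustration, not a complete control.

```python
# Check a candidate password against the Pwned Passwords corpus using the
# k-anonymity range API: only the first 5 hex chars of the SHA-1 are sent.
import hashlib
import requests  # pip install requests

def breach_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Response lines look like "SUFFIX:COUNT"; match our suffix locally.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("correct horse battery staple")
    print("found in breaches" if hits else "not found in breaches", hits)
```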


Italian Businesses Hit by Weaponized USBs Spreading Cryptojacking Malware

A financially motivated threat actor known as UNC4990 is leveraging weaponized USB devices as an initial infection vector to target organizations in Italy. Google-owned Mandiant said the attacks single out multiple industries, including health, transportation, construction, and logistics. "UNC4990 operations generally involve widespread USB infection followed by the deployment of the EMPTYSPACE downloader," the company said in a Tuesday report. "During these operations, the cluster relies on third-party websites such as GitHub, Vimeo, and Ars Technica to host encoded additional stages, which it downloads and decodes via PowerShell early in the execution chain." ... Details of the campaign were previously documented by Fortgale and Yoroi in early December 2023, with the former tracking the adversary under the name Nebula Broker. The infection begins when a victim double-clicks on a malicious LNK shortcut file on a removable USB device, leading to the execution of a PowerShell script that's responsible for downloading EMPTYSPACE (aka BrokerLoader or Vetta Loader) from a remote server via another intermediate PowerShell script hosted on Vimeo.


Understanding Architectures for Multi-Region Data Residency

A critical principle in the context of multi-region deployments is establishing clarity on truth and trust. While knowing the source of truth for a piece of data is universally important, it becomes especially crucial in multi-region scenarios. Begin by identifying a fundamental unit, an "atom," within which all related data resides in one region. This could be an organizational entity like a company, a team, or an organization, depending on your business structure. Any operation that involves crossing these atomic boundaries inherently becomes a cross-region scenario. Therefore, defining this atomic unit is essential in determining the source of truth for your multi-region deployment. In terms of trust, as different regions hold distinct data, communication between them becomes necessary. This could involve scenarios like sharing authentication tokens across regions. The level of trust between regions is a decision rooted in the specific needs and context of your business. Consider the geopolitical landscape if governments are involved, especially if cells are placed in regions with potentially conflicting interests.
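
A minimal sketch of the atomic-unit idea, assuming the organization is the atom: each one has exactly one home region that acts as the source of truth, and any operation touching organizations with different home regions is explicitly surfaced as a cross-region scenario. The mapping and region names are placeholders.

```python
# Sketch: one home region per "atom" (here, an organization). Any operation
# spanning atoms with different home regions is a cross-region scenario.
HOME_REGION = {          # hypothetical org -> home region mapping
    "org-acme": "eu-west",
    "org-globex": "us-east",
    "org-initech": "us-east",
}

def region_of(org_id: str) -> str:
    return HOME_REGION[org_id]      # the source of truth lives here

def plan_operation(org_ids: list[str]) -> str:
    regions = {region_of(o) for o in org_ids}
    if len(regions) == 1:
        return f"single-region operation in {regions.pop()}"
    return (f"cross-region operation spanning {sorted(regions)} "
            "(trust and replication rules apply)")

print(plan_operation(["org-globex", "org-initech"]))  # stays in one region
print(plan_operation(["org-acme", "org-globex"]))     # crosses regions
```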


Developing a Data Literacy Program for Your Organization

Before developing a data literacy program for an organization, it is crucial to conduct a comprehensive training needs assessment. This assessment helps in understanding the current level of data literacy within the organization and identifying areas that require improvement. It involves gathering information about employees’ existing knowledge, skills, and attitudes toward data analysis and interpretation. To conduct the needs assessment, different methods can be employed. Surveys, interviews, focus groups, or even analyzing existing data can provide valuable insights into employees’ proficiency levels and their specific learning needs. By involving various stakeholders, such as managers, department heads, and employees themselves, in this process, a holistic understanding of the organization’s requirements can be achieved. ... It is also beneficial to compare the program’s outcomes against predefined benchmarks or industry standards. This allows organizations to benchmark their progress against other similar initiatives and identify areas where further improvements are necessary. Overall, continuously evaluating the effectiveness of a data literacy program helps organizations understand its impact on individuals’ capabilities and organizational performance.


Women In Architecture: Early Insights and Reflections

The question of why there are so few women in architecture is a key one in our minds. Rather than dwelling on the negative, the conversations focus on identifying the root causes to help us move into action effectively. I have learned that the answer to this question is incredibly nuanced and layered, with many interrelated factors. Some root causes for fewer women in architecture draw from the macro-level context, including a similar set of challenges experienced by women in technology. However, one of the biggest contributors is the architecture profession itself and how it is presented. This has been a hard truth that has asserted itself as a common thread throughout the conversations. For example, the lack of clarity regarding the role and value proposition of architecture, often perceived as abstract, technical, and unattainable, poses a substantial barrier. ... However, there is a powerful correspondence between the momentum for more diversity in architecture and exactly what the profession needs most now. For architects of the future to thrive, it’s not enough to excel at cognitive, architectural, and technical competencies; it is just as important to master human competencies such as communication, influence, leadership, and emotional intelligence.


New York Times Versus Microsoft: The Legal Status of Your AI Training Set

One of the problems the tech industry has had from the start is product contamination using intellectual property from a competitor. The tech industry is not alone, and the problem of one company illicitly acquiring the intellectual property of another and then getting caught goes back decades. If an engineer uses generative AI that has a training set contaminated by a competitor’s intellectual property, there is a decent chance, should that competitor find out, that the resulting product will be found as infringing and be blocked from sale -- with the company that had made use of that AI potentially facing severe fines and sanctions, depending on the court’s ruling. ... Ensuring any AI solution from any vendor contains indemnification for the use of their training set or is constrained to only use data sets that have been vetted as fully under your or your vendor’s legal control should be a primary requirement for use. (Be aware that if you provide AI capabilities to others, you will find an increasing number of customers will demand indemnification.) You’ll need to ensure that the indemnification is adequate to your needs and that the data sets won’t compromise your products or services under development or in market so your revenue stream isn’t put at risk.


How to calculate TCO for enterprise software

It’s obvious that hardware, once it has reached end-of-life, needs to be disposed of properly. With software, there are costs as well, primarily associated with data export. First, data needs to be migrated from the old software to the new, which can be complex given all the dependencies and database calls that might be required for even a single business process. Then there are backups and disaster recovery. The new software might require the data to be formatted in a different way. And you still might need to keep archived copies of certain data stores from the old system for regulatory or compliance reasons. Another wrinkle in the TCO calculation is estimating how long you plan to use the software. Are you an organization that doesn’t change tech stacks if it doesn’t have to and therefore will probably run the software for as long as it still does the job? In that case, it might make sense to do a five-year TCO analysis as well as a 10-year version. On the other hand, what if your company has an aggressive sustainability strategy that calls for eliminating all of its data centers within three years and moving as many apps as possible to SaaS alternatives?
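
To make the arithmetic concrete, the sketch below compares a five-year and a ten-year TCO for a hypothetical system, folding one-off migration and end-of-life data-export costs in with recurring fees; every figure is a placeholder.

```python
# Toy TCO comparison over two horizons. All figures are placeholders.
def tco(years: int,
        license_per_year: float,
        ops_per_year: float,
        migration_in: float,
        data_export_out: float) -> float:
    """One-off migration/export costs plus recurring costs over the horizon."""
    return migration_in + data_export_out + years * (license_per_year + ops_per_year)

for horizon in (5, 10):
    total = tco(horizon,
                license_per_year=120_000,
                ops_per_year=45_000,
                migration_in=80_000,
                data_export_out=30_000)
    print(f"{horizon}-year TCO: ${total:,.0f} "
          f"(${total / horizon:,.0f} per year averaged)")
```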



Quote for the day:

"One advantage of talking to yourself is that you know at least somebody's listening." -- Franklin P. Jones

Daily Tech Digest - September 29, 2022

Hybrid work is the future, and innovative technology will define it

We’re starting to see an amplification of recognition tools, of coaching platforms, of new and exciting ways to learn that are leveraging mobility and looking at how people want to work and to meet them where they are, rather than saying, “Here’s the technology, learn how to use it.” It’s more about, “Hey, we’re learning how you want to work and we’re learning how you want to grow, and we’ll meet you there.” We’re really seeing an uptake in the HR tech space of tools that acknowledge the humaneness underneath the technology itself. ... The second layer consists of the business applications we’ve come to know and love. Those include HR apps, business applications, supply chain applications, and financial applications, et cetera. Certainly, there is a major role in this distributed work environment for virtual application delivery and better security. We need to access those mission-critical apps remotely and have them perform the same way whether they’re virtual, local, or software as a service (SaaS) apps -- all through a trusted access security layer.


Web 3: How to prepare for a technological revolution

It hardly needs to be said, but over the last few decades, the internet has grown to be arguably the most integral cog ensuring a smooth-running, functioning society. It is so ingrained that almost every industry in the world would be unable to function properly without it. And this reliance will only grow as Web 3 becomes the norm, which makes it critical that we begin to educate children now on its uses and how to navigate it. Already, many of today’s adults will find it difficult to explain what Web 3 is, let alone teach the next generation how to use it. Educating children early will not only help them thrive in the future, but they will also be able to pass gained knowledge up the chain to their parents. This is, of course, just history repeating itself. It is the equivalent of kids showing their parents how to use a touch screen or work their email. But the revolution Web3 is about to bring is on a different scale to any previous technological advancement. Soon, the greatest opportunities will be solely available on the new internet, and it is critical we ensure every child has the opportunity to succeed.


Health data governance and the case for regulation

Without appropriate data governance procedures and training in place, healthcare organizations are likely to find themselves in danger of noncompliance. HIPAA violations in particular can occur at any level of an organization; if an undertrained staff member or unsecured database is operating in your organization, there’s a huge likelihood that they will eventually misuse patient data and breach HIPAA regulations. This kind of breach can lead to noncompliance, fines, legal issues, poorer patient experiences and even a loss of trust within the greater medical community. Data governance means the difference between a successful and fully operational facility and a facility that gets shut down by the government. On the other hand, when data governance principles are applied successfully in the healthcare sector, a slew of benefits outside of basic compliance can be realized. Patients feel confident that their information is safe and begin to refer their friends and family members to your network. Data becomes easier to find, label and organize for new operational use cases and emerging patient technologies. 


Closing the Gap Between Complexity and Performance in Today’s Hybrid IT Environments

Nowadays, the increasing need for security on all fronts has fueled collaboration between teams on a regular basis. This, in turn, has spurred more proactivity from an internal IT operations perspective. Proactivity, bolstered by a unified view into traffic and communication, is a key aspect of closing the gap between cloud complexity and performance — because it starts at the IT cultural level. Technical capabilities like deep observability can support team prioritization of detection and management on a more holistic level, addressing all aspects of IT infrastructure. With this, organizations can feel more confident in overcoming cloud-based challenges and mitigating connected cyber vulnerabilities as a collective force. An all-encompassing, proactive approach is needed to speedily detect cyber threats, respond to the corresponding activity, and enact a remediation plan. Within hybrid and multi-cloud environments, data and communication costs can skyrocket. The most common use cases stem from packets, which can interfere with control of and visibility into the right data. 


Blockchain and artificial intelligence: on the wave of hype

Blockchain is an innovative digital information storage system that stores data in an encrypted, distributed ledger format. In operation, the data is encrypted and distributed across multiple computers, which makes it tamper-proof. It is a secure database that can only be read and updated by those with permission. There are only a few examples today of blockchain and artificial intelligence being interconnected, mostly in studies conducted by academics and scientists, but the two concepts can work well together. ... Today’s computers are extremely fast, but they also require a constant supply of data and instructions, without which it is impossible to process information or perform tasks. Therefore, blockchain used on standard computers requires significant computing power because of the encryption processes. Secure data monetization could be the result of combining blockchain and artificial intelligence. Monetization of collected data is a source of revenue for many companies; among the biggest and best known are Facebook and Google.


How MLops deployment can be easier with open-source versioning

Among the many reasons why there is a growing number of vendors in the sector, a significant one is that building and deploying ML models is often a complicated process with many manual steps. A primary goal of MLops tools is to help automate the process of building and deploying models. While automation is important, it only solves part of the complexity. A key challenge for artificial intelligence (AI) models, identified in a recently released Gartner report, is that only about half of AI models actually end up making it into production. From Guttmann’s perspective, with application development, developers tend to have a linear way of building things. This implies that, for example, new code written six months after the initial development is better than the original. That same view does not tend to work with machine learning, as the process involves more research and experimentation to determine what actually works best. “Development is always money sunk into the problem until you actually see the fruits of the effort and we want to decrease that development time to a minimum,” he said.


Robotic Process Automation Will Shape the Future of Hotel Operations

On the guest-facing side of the business, automation can be applied to virtually every touchpoint of the guest journey. Marketing automation comes in the form of upsell opportunities, re-marketing or recovery campaigns in the pre-stay, pre-check-in and post-cancellation stages. In the back of the house, automation is helping the marketing, revenue and sales departments get more done with fewer resources. Integrated CRM systems have become the heart of new, guest-centric personalization strategies such as automated email marketing programs that are proving to be huge time-savers. Revenue managers are tapping automation to stay on top of pricing and demand trends. RPA reduces the common challenges presented by running a business on a fragmented tech stack. Siloed systems often lead to a great deal of manual effort, such as copying, importing, exporting data from one system to another, or the common “swivel-chair integration.” Through RPA, operators can create workflows that fill feature gaps or replicate features from other systems, saving them time and money.


Russian hackers' lack of success against Ukraine shows that strong cyber defences work

Since the invasion, Cameron said, "what we have seen is a very significant conflict in cyberspace – probably the most sustained and intensive cyber campaign on record." But she also pointed to the lack of success of these campaigns, thanks to the efforts of Ukrainian cyber defenders and their allies. "This activity has provided us with the clearest demonstration that a strong and effective cyber defence can be mounted, even against an adversary as well prepared and resourced as the Russian Federation." Cameron argued that not only does this provide lessons for what countries and their governments can do to protect against cyberattacks, but there are also lessons for organisations on how to protect against incidents, be they nation-state backed campaigns, ransomware attacks or other malicious cyber operations. "Central to this is a commitment to long-term resilience," said Cameron. "Building resilience means we don't necessarily need to know where or how the threat will manifest itself next. Instead, we know that most threats will be unable to breach our defences. And when they do, we can recover quickly and fully."


The Unlikely Journey of GraphQL

GraphQL is drawing the spotlight because refactoring or modernization of applications into microservices is stressing REST to its limits. As information consumers, we expect more from the digital platforms that we use. Shop for a product, and we also will want to find reviews, competing offers, autofill keyword search, and likely other options. Monolithic apps crack under the load, and for similar reasons, the same fate could be happening to REST, which requires pinpoint commands to specific endpoints. And with complex queries, lots of pinpoint requests. Facebook developers created GraphQL as a client specification for alleviating the bottlenecks that were increasingly cropping up when fetching data from polyglot sources to a variety of web and mobile clients. With REST, developers had to know all the endpoints. By contrast, with GraphQL, the approach is declarative: specify what data you need rather than how to produce it. While REST is imperative, GraphQL is declarative. 
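
To illustrate the declarative style, the sketch below sends a single GraphQL query that asks for a product together with its reviews and competing offers, instead of several pinpoint REST calls; the endpoint and schema are hypothetical.

```python
# One declarative GraphQL request in place of several REST round-trips.
# Endpoint and schema are hypothetical.
import requests  # pip install requests

QUERY = """
query ProductPage($id: ID!) {
  product(id: $id) {
    name
    price
    reviews(last: 3) { rating text }
    competingOffers { seller price }
  }
}
"""

resp = requests.post(
    "https://api.example.com/graphql",            # hypothetical endpoint
    json={"query": QUERY, "variables": {"id": "p-123"}},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["product"]["name"])
```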


Cryptojacking, DDoS attacks increase in container-based cloud systems

The Sysdig report also noted that there has been a jump in DDoS attacks that use containers since the start of the Russian invasion of Ukraine. "The goals of disrupting IT infrastructure and utilities have led to a four‑fold increase in DDoS attacks between 4Q21 and 1Q22," according to the report. "Over 150,000 volunteers have joined anti‑Russian DDoS campaigns using container images from Docker Hub. The threat actors hit anyone they perceive as sympathizing with their opponent, and any unsecured infrastructure is targeted for leverage in scaling the attacks." Otherwise, a pro-Russian hacktivist group, called Killnet, launched several DDoS attacks on NATO countries. These include, but are not limited to, websites in Italy, Poland, Estonia, Ukraine, and the United States. “Because many sites are now hosted in the cloud, DDoS protections are more common, but they are not yet ubiquitous and can sometimes be bypassed by skilled adversaries,” Sysdig noted. “Containers pre‑loaded with DDoS software make it easy for hacktivist leaders to quickly enable their volunteers.”



Quote for the day:

"Good leaders value change, they accomplish a desired change that gets the organization and society better." -- Anyaele Sam Chiyson

Daily Tech Digest - November 18, 2021

Software Engineering XOR Data Science

Software metrics such as the number of requests, response times, and error types can be extracted by writing good logs. It is also good practice, for both SW and DS projects, to do data checks in the controller class where requests are received and forwarded to the desired services before any operations start, particularly if the UI side does not perform any checks. Regex validations and data type checks can save the application on both the performance and security sides. An easy SQL injection may be prevented by a simple validation, which can even save the company’s future. On the DS side, there should be further monitoring besides the software metrics. The distributions of the model features and the model predictions are vital. If the distribution of any data column changes, there might be a data shift, and new model training may be required. If prediction results can be validated in a short time, they should be monitored, warning developers if the accuracy falls outside a given threshold.
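
For the monitoring point above, here is a small sketch of a distribution-shift check: it compares a feature's training distribution with what the model sees live using a two-sample Kolmogorov-Smirnov test and warns when the drift looks significant. The data and threshold are placeholders.

```python
# Sketch: warn when a feature's live distribution drifts from training.
import numpy as np
from scipy.stats import ks_2samp  # pip install scipy

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training data
live_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)    # shifted "live" data

stat, p_value = ks_2samp(train_feature, live_feature)
ALPHA = 0.01  # placeholder significance threshold
if p_value < ALPHA:
    print(f"possible data shift (KS={stat:.3f}, p={p_value:.1e}); "
          "consider retraining or investigating the pipeline")
else:
    print("no significant drift detected")
```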


What Developers Should Look for in a Low-Code Platform

To minimize technical debt, it makes sense to reduce the number of platforms on which you build your apps. It follows that the fewer systems you have, the more value you get from each. When choosing a low-code platform, evaluate the other workflows you have deployed on that platform already, or plan to deploy in the future. Let’s say you’re considering an employee suggestion box app. If you already have, or will have, a platform where you run employee workflows, then you not only accelerate the time it takes to create the suggestion box app, but reduce additional technical debt with costly integrations. Bringing in a new platform means you have to replicate the employee information, which creates additional cost to implement and maintain. You can remove the need for that integration by having a single system of record where your employee suggestion box and existing employee workflows exist. Process governance is how you manage things like the request or intake process, change management and release management. When it’s time to release an app on your low-code platform, theoretically it should be as easy as clicking a button. But this is where technical governance comes in.


The race to secure Kubernetes at run time

Although developers now tend to test earlier and more often—or shift left, as it is commonly known—containers require holistic protection throughout the entire life cycle and across disparate, often ephemeral environments. “That makes things really challenging to secure,” Gartner analyst Arun Chandrasekaran told InfoWorld. “You cannot have manual processes here; you have to automate that environment to monitor and secure something that may only live for a few seconds. Reacting to things like that by sending an email is not a recipe that will work.” In its 2019 white paper “BeyondProd: A new approach to cloud-native security,” Google laid out how “just as a perimeter security model no longer works for end users, it also no longer works for microservices,” where protection must extend to “how code is changed and how user data in microservices is accessed.” Where traditional security tools focused on either securing the network or the individual workloads, modern cloud-native environments require a more holistic approach than just securing the build.


How organizations are beefing up their cybersecurity to combat ransomware

Educating employees about cybersecurity is another key method to help thwart ransomware attacks. Among those surveyed, 69% said their organization has boosted cyber education for employees over the last 12 months. Some 20% said they haven't yet done so but are planning to increase training in the next 12 months. Knowing how to design your employee security training is paramount. Some 89% of the respondents said they've educated employees on how to prevent phishing attacks, 95% have focused on how to keep passwords safe and 86% on how to create secure passwords. Finally, more than three-quarters (76%) of the respondents said they're concerned about attacks from other governments or nation states impacting their organization. In response, 47% said they don't feel their own government is taking sufficient action to protect businesses from cyberattacks, and 81% believe the government should play a bigger role in defining national cybersecurity protocol and infrastructure. "IT environments have become more fluid, open, and, ultimately, vulnerable," said Bryan Christ, sales engineer at Hitachi ID Systems.


DevOps transformation: Taking edge computing and continuous updates to space

One of the biggest technological pain points in space travel is power consumption. On Earth, we're beginning to see more efficient CPUs and memory, but in space, dissipating the heat of your CPU is quite hard, making power consumption the critical component. From hardware to software to the way you do processing, everything needs to account for power consumption. On the flip side, in space there is one thing you have plenty of (obviously): space. This means that the size of physical hardware is less of a concern. Weight and power consumption are the larger issues, because those factors also shape the way microchips and microprocessors are designed. A great example of this can be found in the Ramon Space design. The company uses AI- and machine-learning-powered processors to build space-resilient supercomputing systems with Earth-like computing capabilities, with the hardware components ultimately controlled by the software running on top of them. The idea is to optimize the way software and hardware are utilized so applications can be developed and adapted in real time, just as they would be here on Earth.


What is LoRaWAN and why is it taking over the Internet of Things?

The wireless communication takes advantage of the long-range characteristics of the LoRa physical layer, allowing a single-hop link between the end-device and one or many gateways. All modes are capable of bi-directional communication, and there is support for multicast addressing groups to make efficient use of spectrum during tasks such as Firmware Over-The-Air (FOTA) upgrades. This has led to LoRaWAN, the network implementation of LoRa, becoming the most widely used LPWAN technology in the unlicensed bands below 1 GHz, giving battery-powered sensor nodes kilometres of range for the expanding set of Internet of Things applications. A key area for this low-power, long-range connectivity is smart cities. The ability to place wireless smart sensors for air quality, traffic density and transportation wherever they are needed across the urban infrastructure can give key insights into the activity of the city. The robust, low-power nature of the protocol allows local authorities to run a cost-effective network with sensors in the right place, whether powered by local power lines, batteries or solar panels.


Serious security vulnerabilities in DRAM memory devices

Razavi and his colleagues have now found that this hardware-based "immune system" only detects rather simple attacks, such as double-sided attacks in which two memory rows adjacent to a victim row are targeted, and can still be fooled by more sophisticated hammering. They devised software aptly named "Blacksmith" that systematically tries out complex hammering patterns in which different numbers of rows are activated with different frequencies, phases and amplitudes at different points in the hammering cycle. It then checks whether a particular pattern led to bit errors. The result was clear and worrying: "We saw that for all of the 40 different DRAM memories we tested, Blacksmith could always find a pattern that induced Rowhammer bit errors," says Razavi. As a consequence, current DRAM memories are potentially exposed to attacks for which there is no line of defense—for years to come. Until chip manufacturers find ways to update mitigation measures on future generations of DRAM chips, computers will remain vulnerable to Rowhammer attacks.
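Purely as a conceptual illustration of the search strategy described above, and not a working exploit, the fuzzing loop can be pictured as a randomised sweep over pattern parameters; hammer_rows and count_bit_flips below are hypothetical stand-ins for the low-level memory access Blacksmith performs in native code:

```python
# Conceptual sketch only: a Blacksmith-style randomised search over hammering
# parameters. hammer_rows() and count_bit_flips() are HYPOTHETICAL placeholders;
# real Rowhammer fuzzing needs carefully timed, cache-bypassing native code.
import random

def hammer_rows(rows, frequency, phase, amplitude):
    # Hypothetical placeholder: would activate the aggressor rows with the
    # given pattern. Does nothing in this sketch.
    pass

def count_bit_flips(victim_rows):
    # Hypothetical placeholder: would re-read the victim rows and count bit
    # errors. Always returns 0 in this sketch.
    return 0

def fuzz_patterns(aggressor_rows, victim_rows, trials=1000):
    findings = []
    for _ in range(trials):
        # Vary which rows are hammered, how often, and where in the refresh
        # cycle the accesses land -- the dimensions the article describes.
        params = {
            "rows": random.sample(aggressor_rows, k=random.randint(2, len(aggressor_rows))),
            "frequency": random.choice([1, 2, 4, 8]),
            "phase": random.uniform(0.0, 1.0),
            "amplitude": random.randint(1, 16),
        }
        hammer_rows(**params)
        flips = count_bit_flips(victim_rows)
        if flips:
            findings.append((params, flips))
    return findings
```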


Money Laundering Cryptomixer Services Market to Criminals

Here's how it functions in greater detail: "Mixers work by allowing threat actors to send a sum of cryptocurrency - usually bitcoin - to a wallet address the mixing service operator owns. This sum joins a pool of the service provider's own bitcoins, as well as other cybercriminals using the service," Intel 471 says in a new report. "The initial threat actor's cryptocurrency joins the back of the 'chain' and the threat actor receives a unique reference number known as a 'mixing code' for deposited funds," the company says. "This code ensures the actor does not get back their own 'dirty' funds that theoretically could be linked to their operations. The threat actor then receives the same sum of bitcoins from the mixer's pool, muddled using the service's proprietary algorithm, minus a service fee." As a further anonymity boost, clean bitcoins can be routed to additional cryptocurrency wallets to make the connections with dirty bitcoins even more difficult for law enforcement authorities to track, Intel 471 says.


Innovation resilience during the pandemic—and beyond

Despite some decline in government spending, the overall impact on global innovation could be minimal. We have reason to be hopeful here: in 2008–09, increases in corporate R&D spending compensated for shortfalls in government R&D investment. And this appears to be true again today based on what we know so far. About 70% of the 2,500 largest global R&D spenders have released their 2020 R&D spending data. We found a healthy increase of roughly 10% in 2020, with roughly 60% of these largest R&D spenders reporting an increase. This reflects a decade-long trend of strong corporate innovation investment, which is perhaps not surprising, as the pace of progress in domains such as artificial intelligence and biotech has increased, and many new commercial growth opportunities have opened up around them. Of course, the view at the sector level is more nuanced. The pandemic-era focus on well-being and the rapid production of vaccines saw increased investments in health-related sectors, with estimates of US government investments in the development of the COVID-19 vaccines ranging from US$18 billion to $23 billion.


How active listening can make you a better leader

When you let the world intrude on a conversation, you unconsciously tell the other person that they are less important than the things around them. Instead, with every interaction strive to make a connection and show people the respect they deserve. To do this, start by limiting distractions. That means closing your laptop, muting your phone, and parking work problems at the door so you can focus and engage with this person in this moment. Of the hundreds of things littering your calendar, is any one of them more important than leading your team? ... It's not enough simply to silence the world. Show the person that you are listening intently through your responses and body language: Make eye contact and provide brief verbal affirmations or nod, modulating the tone of your voice as well as mirroring their body mannerisms. Paraphrasing what the other person is saying can also be a helpful tool to show you understand or are seeking clarity. When you take time to validate what someone is saying, they will feel comfortable sharing more.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - November 16, 2021

8 tips for a standout security analyst resume

Employers increasingly expect to see hands-on experience, says Keatron Evans, principal security researcher at security education provider InfoSec. “Have you done packet capture analysis? Can you understand and parse logs or done incident response in the cloud? It’s important to have that kind of demonstrable hands-on experience verbalized in a resume,” he says. The expectation is high because even if you haven’t held a security analyst job, hands-on experience can be acquired in other ways today, such as training exercises offered by companies like InfoSec, Immersive Labs, and Pluralsight. “Before, training was mostly certificate-driven—it wasn’t geared toward proving you can do these things,” Evans says. “Now there’s simulation in the training environment, which is turning into a good gateway to get your foot in the door.” If candidates can send a five-minute screen capture of themselves performing a task, “it’s worth more than a thousand words,” Evans says. Capture-the-flag (CTF) events are another highlight to include. If you’ve placed well in a well-known CTF or completed a penetration test, put that at the top of the resume as well, he says.


Reducing Cloud Infrastructure Complexity

Organizations the world over are indeed struggling with unnecessarily complicated multi-cloud environments. Validating the global struggle, Enterprise Strategy Group recently conducted a global survey of 1,257 IT decision makers at enterprise and midmarket organizations using both public cloud infrastructure and modern on-premises private cloud environments. The results hit home, and really do cement the notion that this cloud fragmentation is getting worse as time goes on - and that there are many companies out there seeking a 'savior' toolset to get a zoomed-out view of policies, compliance, security and cost optimization. An unsurprising outcome of the survey is that there is clear value in cloud management - yet even knowing that value, organizations are struggling with implementation. A mere 5% used consolidated cloud management tools extensively on premises or across public and/or private cloud. This is despite a burgeoning marketplace of all-in-one solutions like VMware vRealize Suite, Flexera CMP, CloudBolt, and others.


Distributed SQL Takes Databases to the Next Level

Distributed SQL is a relational database win-win. The technology’s innovations are based on lessons learned over the past thirty or so years to deliver true dynamic elasticity. The modern benefits of dynamic elasticity include the ability to add or remove nodes simply, quickly, and on-demand. The approach is self-managing, able to automatically rebalance nodes or rebalance data within those nodes while maintaining extremely high continuous availability (i.e., automatic failover). And of course, the approach includes all of the features that make relational databases so powerful, like the ability to use standard SQL (including JOINs) and to maintain ACID compliance. A distributed SQL option like MariaDB’s Xpand is architected for all nodes to work together to form a single logical and distributed database that all applications can point to, regardless of the intended use case. Whether a business needs a three-node cluster for modest workloads or hundreds, even thousands, of nodes for unlimited scalability, distributed SQL means deployments can grow or shrink on demand.
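As a small, hedged illustration of the "single logical database" point (the host, credentials, and schema below are placeholders, not details from the article), an application can keep issuing ordinary SQL, including JOINs, against one endpoint while the cluster distributes the work:

```python
# Minimal sketch: an application queries a distributed SQL cluster (e.g. a
# MariaDB Xpand deployment) exactly as it would a single server. The host,
# credentials, and table names are placeholder assumptions.
import mariadb

conn = mariadb.connect(
    host="xpand.example.internal",   # single logical endpoint for the cluster
    port=3306,
    user="app_user",
    password="app_password",
    database="shop",
)
cur = conn.cursor()

# Standard SQL with a JOIN; the cluster handles distribution and rebalancing.
cur.execute(
    """
    SELECT o.id, o.total, c.name
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    WHERE o.created_at >= ?
    """,
    ("2021-11-01",),
)
for order_id, total, customer_name in cur:
    print(order_id, total, customer_name)

conn.close()
```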


Singularity Is Fast Approaching, and It Will Happen First in the Metaverse

To elaborate on this point, if we think about it, we have had mainframe computers that evolved to personal computers, which then evolved to mobile devices. In the case of the metaverse, however, the leap or iteration is not necessarily to a faster device. Instead, it is to simulations of virtual worlds and environments where, through VR and AR, we can finally buy things in the real world, and in particular buy things that exist only in those virtual environments. Connecting computers to the internet catapulted them to mainstream use and marked the beginning of the dot-com era, and the last 15 years were clearly shaped by the mobile phone, which led to full mass adoption. This makes the metaverse a very practical concept. It's not just VR concerts, 3D gatherings or digital assets; the metaverse is an idea that brings these concepts together and tries to explain how they are all connected. Matthew Ball, an outspoken proponent of the metaverse, has outlined a few key ideas that show what this evolved form of the internet will look like.


Data Science Is Becoming More Tool-Oriented. Will It Kill The Science Behind?

My other concern is that the number of tools you can use might be considered more important than your data science knowledge. This situation leads to data scientists being evaluated based on tool knowledge, not science. If this happens, it will be a serious problem. Software tools are just there for turning ideas into action or value. The ideas come from data scientists who blend analytical thinking, creativity, statistics, and theory. If data scientists are forced to learn as many tools as possible, they might miss the point. They will get very quick at performing tasks thanks to the highly advanced tools. However, this is not enough for creating value out of data. What leads to creating value is first to define a problem that can be solved with data. Once a problem is defined and a solution is designed, the tools are then needed to do the tasks. I think you would agree that without a problem and solution, there is no use for advanced software tools. To sum up, we definitely need software tools and packages to perform data science. They enable us to work with large amounts of data quickly and efficiently.


Cybercriminals Target Alibaba Cloud for Cryptomining, Malware

Targeting of Alibaba is on the rise thanks to a few unique features of the service, researchers noted, and the way that cloud instances can be configured. “The default Alibaba ECS instance provides root access,” according to the analysis. “With Alibaba, all users have the option to give a password straight to the root user inside the virtual machine (VM).” This is in contrast to how other cloud service providers architect their storage access, researchers pointed out. In most cases, the principle of least privilege is front and center, with different options such as not allowing Secure Shell (SSH) authentication over user and password, or allowing asymmetric cryptography authentication. That way, if cyberattackers gain credentials, entering with only low-privilege access would require them to make an “enhanced effort” to escalate the privileges, according to Trend Micro: “Other cloud service providers do not allow the user to log in via SSH directly by default, so a less privileged user is required.” 


The difference between insights, actionable information and intelligent data

Insight might ultimately stem from data, but that doesn’t mean data – or even ‘intelligent’ data – should be conflated with insight. They aren’t the same thing. That’s not to say that adding intelligence to data isn’t important, of course. After all, raw and unprocessed data can only gain value once we’ve added intelligence – once we have, in other words, annotated and quantified that data. This principle is, of course, magnified in the context of what’s known as ‘big data’, which I generally take to mean datasets that must be measured in terms of gigabytes and exabytes. Through the use of machine learning, it’s possible to add intelligence to this kind of massive dataset through annotating it with sentiment, emotions, topics, and other useful variables. ... As we ascend our knowledge pyramid, it’s easy to see how one might confuse actionable information with true insight. Nonetheless, there is an important distinction between the two terms, especially when it comes to their respective relationships to long-term strategy.
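To make the "adding intelligence" step concrete, a hedged sketch using NLTK's VADER analyzer (one possible toolkit among many; the sample records are invented) shows raw text being annotated with a sentiment score before any attempt at insight:

```python
# Minimal sketch: annotating raw text with a sentiment score so the data
# carries some added intelligence. NLTK's VADER is just one possible tool;
# the sample records are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-off lexicon download
sia = SentimentIntensityAnalyzer()

records = [
    {"id": 1, "text": "The new release is fantastic and much faster."},
    {"id": 2, "text": "Support never answered my ticket."},
]

for record in records:
    scores = sia.polarity_scores(record["text"])
    record["sentiment"] = scores["compound"]   # -1 (negative) .. +1 (positive)

print(records)
```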


What is Customer Identity and Access Management (CIAM)?

A CIAM system typically resides in the cloud and operates under a software-as-a-service (SaaS) model. It relies on built-in connectors and APIs to tie together various enterprise applications, systems, and data repositories. This makes it possible to combine features including customer registration, account management, directory services, and authentication. When a customer visits a website or calls in, for example, the CIAM solution handles the authentication process (using a password, single sign-on, biometrics, or multiple factors, for example). It is also adept at juggling different protocols, including SAML, OpenID Connect, OAuth and FIDO. Once a customer signs in, it's possible to place an order, track delivery, update a user profile, and handle other account-related tasks. Another benefit of CIAM is that it delivers risk-based authentication (RBA), which is sometimes referred to as adaptive authentication. This means that the system can look at signs and signals -- such as a user's IP address, User-Agent HTTP headers, the date and time of access, and other factors -- to gauge the risk of each interaction.
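A toy, hedged sketch of how risk-based authentication might weigh the signals mentioned above follows; the weights, threshold, and step-up action are illustrative assumptions rather than any vendor's actual scoring:

```python
# Toy sketch of risk-based (adaptive) authentication scoring. Weights,
# threshold, and the step-up action are assumptions for illustration,
# not any CIAM product's actual algorithm.
from datetime import datetime

KNOWN_IPS = {"203.0.113.7"}                        # IPs previously seen for this user
KNOWN_USER_AGENTS = {"Mozilla/5.0 (X11; Linux x86_64)"}
STEP_UP_THRESHOLD = 2                              # assumed cut-off for extra factors

def risk_score(ip: str, user_agent: str, login_time: datetime) -> int:
    score = 0
    if ip not in KNOWN_IPS:
        score += 2                                 # unfamiliar network location
    if user_agent not in KNOWN_USER_AGENTS:
        score += 1                                 # unfamiliar device/browser
    if login_time.hour < 6 or login_time.hour > 22:
        score += 1                                 # unusual time of day
    return score

def authenticate(ip: str, user_agent: str, login_time: datetime) -> str:
    if risk_score(ip, user_agent, login_time) >= STEP_UP_THRESHOLD:
        return "step-up: require a second factor (e.g. OTP or FIDO key)"
    return "allow with password / SSO alone"

print(authenticate("198.51.100.20", "curl/7.79.1", datetime(2021, 11, 16, 3, 15)))
```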


Researchers Spot Comeback of the Emotet Botnet

The development is concerning but not surprising for those fighting large-scale botnets. Emotet and Trickbot were essentially run by different departments of one cybercriminal organization that's based in Russia, says Alex Holden, CISO of Hold Security, a Wisconsin-based security consultancy that studies the cybercriminal underground. Researchers have long noticed close associations, with Emotet distributing Trickbot and vice versa. Both have been linked to the distribution of ransomware including Ryuk and Conti. "We knew that it [Emotet] would come back," Holden says. "It was a matter of time. But it may signal more battles are ahead." Emotet was the "biggest and baddest" botnet before it was taken down, says James Shank, senior security evangelist and chief architect, Community Services with Team Cymru. A new version of Emotet is being distributed by Trickbot, says Marcus Hutchins, a malware researcher with Kryptos Logic who is also part of Cryptolaemus, a notable group of top security researchers and systems administrators dedicated to fighting Emotet. Emotet's return will likely mean greater distribution of ransomware eventually.


Council Post: Building Human-In-The-Loop Systems

Human-in-the-loop systems are essentially about providing this context to AI models. This context can take various forms. It includes removing bias from models so they adhere to ethical standards, providing situational-awareness information to improve predictions, or dispensing a final oversight before a decision is made. Context also flows the other way: the AI system provides context to the human being for further action. When this critical piece of information is missing, it leads to what is popularly known as “the black box problem,” where users don't really understand how the model has churned through the data and arrived at a decision. Given that algorithms now drive parts of our lives with their use in driving cars, giving product recommendations, making investment decisions and even predicting employee attrition, it is becoming integral for stakeholders to understand and trust these AI operations. The key to designing a successful human-in-the-loop system is solving this challenge of two-way communication of context.
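As a minimal sketch of one common human-in-the-loop pattern (the confidence threshold, review queue, and record format are illustrative assumptions), low-confidence predictions are escalated to a person, and the model's context travels with the case in both directions:

```python
# Minimal sketch of a human-in-the-loop decision flow: confident model outputs
# pass straight through, uncertain ones are escalated to a person. The
# threshold and the review queue are assumptions for illustration.
REVIEW_THRESHOLD = 0.80
review_queue = []            # stand-in for a real task/review system

def model_predict(record):
    # Placeholder model: in practice this would be a trained classifier.
    return {"label": "approve", "confidence": 0.64, "reasons": ["low credit history"]}

def decide(record):
    prediction = model_predict(record)
    if prediction["confidence"] >= REVIEW_THRESHOLD:
        return {"decision": prediction["label"], "decided_by": "model",
                "context": prediction["reasons"]}
    # Route to a human, passing along the model's context so neither side
    # of the hand-off is a black box.
    review_queue.append({"record": record, "model_view": prediction})
    return {"decision": "pending-human-review", "decided_by": "human",
            "context": prediction["reasons"]}

print(decide({"applicant_id": 42}))
print(f"{len(review_queue)} case(s) waiting for a reviewer")
```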



Quote for the day:

"A lot of people have gone farther than they thought they could because someone else thought they could." -- Zig Zigler