
Daily Tech Digest - March 27, 2026


Quote for the day:

“Our greatest fear should not be of failure … but of succeeding at things in life that don’t really matter.” -- Francis Chan


🎧 Listen to this digest on YouTube Music


Duration: 22 mins • Perfect for listening on the go.


Digital Transformation Is Not A Technology Problem; It’s An Addition Problem

In the Forbes Tech Council article, Andrew Siemer argues that the staggering failure rate of digital transformation—with some reports suggesting up to 88% of initiatives fall short—stems from a fundamental behavioral bias known as the "addition default." Drawing on research from the University of Virginia, Siemer explains that humans instinctively attempt to solve complex problems by adding new elements, such as additional software platforms or dashboards, rather than subtracting existing inefficiencies. This compulsion to add is particularly pronounced under cognitive load, leading companies to accumulate technical debt and complexity even as global digital transformation investments are projected to reach $4 trillion by 2028. Siemer contends that the most successful organizations are those that resist this additive instinct and instead focus on "removing work." He challenges leaders to reconsider their transformation roadmaps, which often default to implementation and replacement, and instead prioritize radical simplification. By asking what processes should be stopped rather than what technology should be started, businesses can move beyond the cycle of unsuccessful investment. Ultimately, digital transformation is not merely a technological challenge but a strategic discipline of subtraction that requires shifting focus from scaling tools to streamlining core operations.


Vendors race to build identity stack for Agentic AI

The rapid rise of autonomous AI agents, capable of executing complex tasks and financial transactions at machine speed, has triggered a competitive race among identity management vendors to develop specialized "identity stacks." Traditional security frameworks, designed for human interaction and intermittent logins, are proving insufficient for managing autonomous entities that lack natural human friction. Consequently, enterprises face significant visibility and accountability gaps regarding agent activity and permissions. To address these vulnerabilities, major players like Ping Identity have launched dedicated frameworks such as "Identity for AI," which focuses on real-time enforcement and delegated authority rather than shared human credentials. Simultaneously, firms like Wink and Vouched are integrating multimodal biometrics to anchor agent actions to verifiable human consent, particularly for scoped payment authorizations that limit transaction amounts. Other innovators, including Saviynt and Dock Labs, are introducing governance platforms and open protocols to manage agent-to-agent trust and verify intent via cryptographic credentials. By shifting enforcement to runtime and treating AI agents as a distinct identity class, these vendors aim to provide the necessary guardrails for the emerging era of agentic commerce, ensuring that autonomous systems remain securely anchored to provable human oversight and rigorous auditable standards.
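
As a rough illustration of what "delegated authority" and scoped payment authorization could look like at runtime, here is a minimal Go sketch. The token fields, limits, and names are hypothetical and do not reflect any particular vendor's API; the point is simply that the agent's authority is checked per action instead of the agent holding shared human credentials.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// DelegationToken is a hypothetical record of authority a human has granted
// to an AI agent: who delegated it, what it may spend, and when it expires.
type DelegationToken struct {
	HumanPrincipal string
	AgentID        string
	MaxAmountCents int64
	AllowedScope   string // e.g. "payments:groceries"
	ExpiresAt      time.Time
}

// AuthorizePayment enforces the scoped authorization at runtime for each
// attempted action, rather than trusting the agent outright.
func AuthorizePayment(t DelegationToken, scope string, amountCents int64, now time.Time) error {
	if now.After(t.ExpiresAt) {
		return errors.New("delegation expired: re-verify human consent")
	}
	if scope != t.AllowedScope {
		return fmt.Errorf("scope %q not granted to agent %s", scope, t.AgentID)
	}
	if amountCents > t.MaxAmountCents {
		return fmt.Errorf("amount exceeds delegated limit of %d cents", t.MaxAmountCents)
	}
	return nil
}

func main() {
	token := DelegationToken{
		HumanPrincipal: "alice",
		AgentID:        "shopping-agent-7",
		MaxAmountCents: 5000,
		AllowedScope:   "payments:groceries",
		ExpiresAt:      time.Now().Add(1 * time.Hour),
	}
	// Within limit and scope: allowed (prints <nil>).
	fmt.Println(AuthorizePayment(token, "payments:groceries", 2500, time.Now()))
	// Over the delegated limit: rejected at runtime.
	fmt.Println(AuthorizePayment(token, "payments:groceries", 9000, time.Now()))
}
```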


Inside a Modern Fraud Attack: From Bot Signups to Account Takeovers

The article "Inside a Modern Fraud Attack: From Bot Signups to Account Takeovers" highlights the evolution of digital fraud into a sophisticated, multi-stage "relay race" that bypasses traditional security measures. These attacks typically begin with large-scale automation, utilizing bots and scripts to create numerous accounts using compromised emails and residential proxies to mimic legitimate residential traffic. As the attack progresses, fraudsters pivot from automated methods to slower, human-driven activities to blend in with normal user behavior. This tactical shift culminates in account takeovers and monetization through credential stuffing or phishing. The article argues that relying on single-signal defenses, such as IP reputation or email validation alone, is increasingly ineffective and prone to false positives. Instead, organizations must adopt a multi-signal correlation strategy that unifies IP intelligence, device fingerprinting, identity verification, and behavioral analytics. By evaluating these data points in context throughout the entire user journey, security teams can effectively identify coordinated abuse clusters while maintaining a low-friction experience for genuine customers. Ultimately, outpacing modern fraud requires a holistic, integrated risk model that moves beyond disconnected, point-in-time checks to address the full lifecycle of complex cyberattacks.


What IT leaders need to know about AI-fueled death fraud

AI-fueled death fraud is an emerging cybersecurity threat where criminals leverage generative AI to produce highly convincing, fake death certificates and legal documents. By faking a customer’s passing or impersonating heirs, fraudsters exploit empathetic bereavement workflows to seize control of sensitive accounts, financial assets, and personal data. This tactic is particularly dangerous because many enterprise identity systems are designed for long-term users and lack robust protocols for managing post-mortem transitions. Currently, the absence of centralized, real-time government databases for death verification creates a significant security gap that IT leaders must address. Beyond direct financial theft, attackers often use compromised accounts to launch sophisticated social engineering campaigns against the victim’s contacts. To mitigate these risks, experts suggest that IT leaders move away from simple credential-based access toward delegated authority frameworks and behavioral analytics that monitor for sudden, unexplained shifts in account activity. Furthermore, organizations should update terms of service to define digital legacy procedures. By formalizing verification processes and integrating rigorous oversight, businesses can better protect customers’ digital estates from being weaponized. This approach ensures the human element of bereavement does not become a permanent vulnerability in an increasingly automated world.


Vibe coding your own enterprise apps is edgy business

"Vibe coding," the practice of using AI agents to generate software through natural language prompts, is revolutionizing enterprise application development while introducing significant operational risks. As detailed in the CIO article, this shift enables companies to rapidly prototype and build custom internal tools—such as dashboards and workflow systems—often bypassing traditional procurement processes and expensive external agencies. While the speed and cost-effectiveness of this approach are seductive, IT leaders warn that it can quickly lead to a maintenance nightmare. Unlike road-tested SaaS platforms, vibe-coded applications place the entire burden of security, integration, and long-term support directly on the organization. Furthermore, the ease of creation risks fostering a chaotic environment of "shadow IT," where unsupervised employees generate technical debt and fragmented systems lacking robust architecture. Experts highlight a "seduction phase" where tools initially appear brilliant but later fail under the weight of production requirements or data integrity concerns. Consequently, CIOs are urged to implement strict governance, ensure human-in-the-loop oversight, and maintain a cautious distance from using experimental AI for mission-critical systems. Ultimately, vibe coding offers a powerful competitive edge for innovation, yet successful enterprise adoption requires balancing rapid creativity with disciplined engineering standards to prevent a future of unmanageable and broken software.


The CISO’s guide to responding to shadow AI

The rapid proliferation of artificial intelligence has introduced a new cybersecurity challenge known as shadow AI, where employees utilize unapproved AI tools to boost productivity. This CSO Online guide outlines a strategic four-step framework for CISOs to manage these hidden risks effectively. First, leaders must calmly assess risks by evaluating data sensitivity and potential for breaches rather than reacting impulsively. Understanding the underlying motivations for shadow AI use is the second step, as it often reveals unmet business needs or productivity gaps. Third, CISOs must decide whether to strictly block these tools or integrate them through formal vetting processes involving legal and security reviews. Finally, the article emphasizes evolving AI governance by improving employee education and creating clear pathways for tool approval. Rather than relying solely on punishment, organizations should foster a culture of accountability where responsibility for AI safety is shared across all departments. Ultimately, while shadow AI cannot be entirely eliminated, it can be mitigated through proactive management and transparent communication. By viewing these instances as opportunities to refine policy and secure additional resources, CISOs can transform shadow AI from a liability into a catalyst for secure innovation.


Why ‘Invisible AI’ is at the heart of durable value creation for enterprises

In the article "Why Invisible AI is at the Heart of Durable Value Creation for Enterprises," Ankor Rai argues that the most impactful artificial intelligence initiatives are those integrated so deeply into operational workflows that they become virtually invisible. While many organizations struggle to scale AI beyond experimental models, durable value is found when intelligence is embedded directly into the fabric of daily processes to stabilize operations and reduce friction. This "invisible AI" shifts the focus from dramatic transformations to preventative success, where value is measured by the absence of failures, such as equipment downtime or stalled workflows. Rai highlights that the primary challenge is bridging the gap between insight and action; effective systems deliver real-time signals at the precise moment of decision rather than through separate reports. By automating repetitive, high-volume tasks like data reconciliation and anomaly detection, enterprises do not replace human expertise but rather protect it, allowing leadership to focus on nuanced strategy and complex problem-solving. Ultimately, the maturity of enterprise technology is evidenced by its ability to quietly improve reliability and compress error margins. This invisible integration creates a compounding competitive advantage rooted in operational resilience, consistency, and the preservation of organizational bandwidth over time.


Intermediaries Driving Global Spyware Market Expansion

The proliferation of third-party intermediaries, including resellers and exploit brokers, is significantly expanding the global spyware market by undermining transparency efforts and bypassing government restrictions. According to a recent report from the Atlantic Council, these entities serve as the operational backbone of the industry, enabling both sanctioned nations and private actors to acquire advanced surveillance tools regardless of trade bans or diplomatic tensions. By muddying supply chains and obscuring the origins of offensive cyber capabilities, intermediaries allow countries with limited technical expertise to purchase sophisticated hacking software on the open market. This evolution has transformed the spyware ecosystem into a modular supply chain where commercial vendors now outpace traditional state-sponsored groups in zero-day exploit attribution. Despite international diplomatic efforts like the Pall Mall Process, regulating this "shadowy" marketplace remains difficult because the complex corporate structures of these brokers are designed specifically to make export controls irrelevant. Experts suggest that establishing "Know Your Vendor" requirements and formal certification processes for resellers are essential steps toward gaining visibility. Ultimately, the lack of transparency driven by these intermediaries continues to pose a severe threat to human rights and global security as surveillance technology spreads unchecked across borders.


Designing self-healing microservices with recovery-aware redrive frameworks

In modern cloud-native architectures, traditional retry mechanisms often exacerbate system failures by triggering "retry storms" that overwhelm recovering services. To address this, the article introduces a recovery-aware redrive framework specifically designed to create truly self-healing microservices. This framework operates through three critical stages: failure capture, health monitoring, and controlled replay execution. Initially, failed requests are persisted in durable queues with full metadata to ensure exact replay semantics. Instead of immediate retries, a monitoring function continuously evaluates downstream service health metrics, such as error rates and latency. Once recovery is confirmed, queued requests are replayed at a controlled, throttled rate to prevent further network congestion. This decoupled approach ensures that all failed requests are eventually processed while maintaining overall system stability and avoiding dangerous cascading failures. By integrating real-time health data with a gated replay mechanism, the framework enhances observability and provides a platform-agnostic solution for complex distributed systems. Ultimately, this method reduces the need for manual intervention, improves long-term reliability, and allows engineers to track recovery events with high precision, making it a vital evolution for resilient microservice design in high-scale environments where maintaining uptime is paramount.
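
A minimal Go sketch of the three stages described above, assuming a durable queue (stood in for here by a channel), a downstream health signal, and a throttled replay loop. The names and the placeholder health check are illustrative only; in a real system the queue would be a durable store and the health check would read real error-rate and latency metrics.

```go
package main

import (
	"fmt"
	"time"
)

// FailedRequest is persisted with enough metadata for exact replay.
// A real system would keep these in a durable queue; a channel stands in here.
type FailedRequest struct {
	ID       string
	Payload  string
	FailedAt time.Time
}

// downstreamHealthy stands in for the monitoring function that evaluates
// error rates and latency of the recovering service.
func downstreamHealthy() bool {
	return time.Now().Unix()%2 == 0 // placeholder signal for the sketch
}

// redrive gates replay on health and throttles it with a ticker so the
// recovering service is not hit by a retry storm.
func redrive(queue <-chan FailedRequest, ratePerSecond int) {
	ticker := time.NewTicker(time.Second / time.Duration(ratePerSecond))
	defer ticker.Stop()
	for req := range queue {
		for !downstreamHealthy() {
			time.Sleep(500 * time.Millisecond) // wait; never retry into a failing service
		}
		<-ticker.C // throttle the replay rate
		fmt.Printf("replaying %s (originally failed at %s)\n", req.ID, req.FailedAt.Format(time.RFC3339))
	}
}

func main() {
	queue := make(chan FailedRequest, 3)
	for i := 1; i <= 3; i++ {
		queue <- FailedRequest{ID: fmt.Sprintf("req-%d", i), Payload: "{}", FailedAt: time.Now()}
	}
	close(queue)
	redrive(queue, 2) // replay at most 2 requests per second
}
```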


Architectural Governance at AI Speed

In the era of generative AI, where code has become a commodity, the primary challenge for software organizations is no longer production but architectural alignment. The InfoQ article "Architectural Governance at AI Speed" argues that traditional review boards and centralized oversight can no longer scale with the sheer volume of AI-generated output. Instead, it proposes "Declarative Architecture," a model that transforms Architectural Decision Records (ADRs) and Event Models into machine-enforceable guardrails. By utilizing vertical slices—self-contained units of behavior—teams can automate code generation and validation, ensuring that the conformant path becomes the path of least resistance. A key mechanism described is the "Ralph Wiggum Loop," an AI-looping technique where agents iteratively refine implementations until they meet specific Given-When-Then criteria. This approach enables decentralized governance by allowing teams to work independently while maintaining cohesion through shared collaborative modeling. Ultimately, the shift from "dumping left" to automated, declarative systems allows human architects to move beyond policing implementation details and focus on high-level intent and product alignment. By embedding governance directly into the development lifecycle, organizations can achieve rapid delivery without sacrificing system integrity or consistency across team boundaries.
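
As a toy illustration of the iterate-until-criteria-pass idea behind that loop, here is a short Go sketch. In practice the generation step would call an LLM and the checks would run real Given-When-Then tests, so everything below is a simplified assumption rather than the article's actual tooling.

```go
package main

import "fmt"

// Criterion is a machine-checkable Given-When-Then acceptance test.
type Criterion struct {
	Name  string
	Check func(output string) bool
}

// generate stands in for an AI agent producing a candidate implementation;
// it "improves" deterministically here so the sketch terminates.
func generate(attempt int) string {
	if attempt < 3 {
		return "incomplete draft"
	}
	return "handles order submitted event"
}

// refineUntilConformant loops the agent until every criterion passes or the
// iteration budget is exhausted, making the conformant path the default one.
func refineUntilConformant(criteria []Criterion, maxAttempts int) (string, bool) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		out := generate(attempt)
		pass := true
		for _, c := range criteria {
			if !c.Check(out) {
				fmt.Printf("attempt %d failed: %s\n", attempt, c.Name)
				pass = false
				break
			}
		}
		if pass {
			return out, true
		}
	}
	return "", false
}

func main() {
	criteria := []Criterion{{
		Name:  "Given an order, When it is submitted, Then the event is handled",
		Check: func(out string) bool { return out == "handles order submitted event" },
	}}
	result, ok := refineUntilConformant(criteria, 5)
	fmt.Println(ok, result)
}
```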

Daily Tech Digest - June 16, 2025


Quote for the day:

"A boss has the title, a leader has the people." -- Simon Sinek


How CIOs are getting data right for AI

Organizations that have taken steps to better organize their data are more likely to possess data maturity, a key attribute of companies that succeed with AI. Research firm IDC defines data maturity as the use of advanced data quality, cataloging and metadata, and data governance processes. The research firm’s Office of the CDO Survey finds firms with data maturity are far more likely than other organizations to have generative AI solutions in production. ... “We have to be mindful of what we put into public data sets,” says Yunger. With that caution in mind, Servier has built a private version of ChatGPT on Microsoft Azure to ensure that teams benefit from access to AI tools while protecting proprietary information and maintaining confidentiality. The gen AI implementation is used to speed the creation of internal documents and emails, Yunger says. In addition, personal data that might crop up in pharmaceutical trials must be treated with the utmost caution to comply with the European Union’s AI Act,  ... To achieve what he calls “sustainable AI,” AES’s Reyes counsels the need to strike a delicate balance: implementing data governance, but in a way that does not disrupt work patterns. He advises making sure everyone at your company understands that data must be treated as a valuable asset: With the high stakes of AI in play, there is a strong reason it must be accurately cataloged and managed.


Alan Turing Institute reveals digital identity and DPI risks in Cyber Threats Observatory Workshop

The trend indicates that threat actors could be targeting identity mechanisms such as authentication, session management, and role-based access systems. The policy implication for governments translates to a need for more detailed cyber incident reporting across all critical sectors, the institute recommends. An issue is the “weakest link” problem. A well-resourced sector like finance might invest in strong security, but their dependence on, say, a national ID system means they are still vulnerable if that ID system is weak. The institute believes this calls for viewing DPI security as a public good. Improvements in one sector’s security, such as “hardened” digital ID protocols, could benefit other sectors’ security. Integrating security and development teams is recommended as is promoting a culture of shared cyber responsibility. Digital ID, government, healthcare, and finance must advance together on the cybersecurity maturity curve, the report says, as a weakness in one can undermine the public’s trust in all. The report also classifies CVEs by attack vectors: Network, Local, Adjacent Network, and Physical. Remote Network threats were dominant, particularly affecting finance and digital identity platforms. But Local and Physical attack surfaces, especially in health and government, are increasingly relevant due to on-premise systems and biometric interfaces, according to the Cyber Threat Observatory.


The Advantages Of Machine Learning For Large Restaurant Chains

Machine learning can not only assist with present-day operations but also help steer long-term planning and development. By discovering patterns across locations, customer groups, and product categories, decision-makers can spot opportunities to enter new markets, develop new products, or redistribute resources. These insights go beyond surface-level data and reveal trends that manual analysis alone might miss. The ability to make data-driven decisions becomes even more significant as restaurant chains grow. Combined with other technologies such as drive-thru systems or cloud-based analytics platforms, machine learning tools provide scalable insights that align with broader business objectives. Whether opening a new venue or optimizing an advertising campaign, machine learning gives management the information needed to decide with confidence and competence. ... Machine learning is transforming how major restaurant chains run their business, providing an unmatched mix of accuracy, speed, and flexibility compared with older approaches.


How Staff+ Engineers Can Develop Strategic Thinking

For risk and innovation, you need to understand what your organization values most. Every organization has a culture memo and a set of tenets it follows, but these are part of the unsaid rules, something every new hire learns within the first week of onboarding even though it is never written out loud and clear. In my experience, there are different kinds of organizations. Some care about execution: results above everything, top line, bottom line. Others care about data-driven decision-making, customer sentiment, and continually adapting. Still others care about storytelling and relationships. What does this really mean? If you fail to influence, if you fail to tell a story about the ideas you have and what you're really trying to do, to build trust and relationships, you may not succeed in that environment, because it's not enough to be smart and know it all. You also need to know how to convey your ideas and influence people. When you talk about innovation, there are companies that really pride themselves on experimentation and staying ahead of the curve. You can gauge this by whether they have an R&D department and how much funding they put into it, and then by their role in the open-source community and how much they contribute to it.


Legal and Policy Responses to Spyware: A Primer

There have been a number of international efforts to combat at least some aspects of the harms of commercial spyware. These include the US-led Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the Pall Mall Process, an ongoing multistakeholder undertaking focussed on this issue. So far, principles, norms, and calls for businesses to comply with the United Nations Guiding Principles on Business and Human Rights (UNGPs) have emerged, and Costa Rica has called for a full moratorium, but no well-orchestrated international action has been fully brought to fruition. However, private companies and individuals, regulators, and national or regional governments have taken action, employing a wide range of legal and regulatory tools. Guidelines and proposals have also been articulated by governmental and non-governmental organizations, but we will focus here on measures that are existent and, at least in theory, enforceable. While some attempts at combating spyware, like WhatsApp’s, have been effective, others have not. Analyzing the strengths and weaknesses of each approach is beyond the scope of this article, and, considering the international nature of spyware, what fails in one jurisdiction may be successful in another.


Red Teaming AI: The Build Vs Buy Debate

In order to red team your AI model, you need to have a deep understanding of the system you are protecting. Today’s models are complex, multimodal, multilingual systems. One model might take in text, images, code, and speech, with any single input having the potential to break something. Attackers know this and can easily take advantage. For example, a QR code might contain an obfuscated prompt injection, or a roleplay conversation might lead to ethical bypasses. This isn’t just about keywords, but about understanding how intent hides beneath layers of tokens, characters, and context. The attack surface isn’t just large, it’s effectively infinite. ... Building versus buying is an age-old debate. Fortunately, the AI security space is maturing rapidly, and organizations have a lot of options to choose from. After you have some time to evaluate your own criteria against Microsoft, OWASP and NIST frameworks, you should have a good idea of what your biggest risks and key success criteria are. After considering risk mitigation strategies, and assuming you want to keep AI turned on, there are some open-source deployment options like Promptfoo and Llama Guard, which provide useful scaffolding for evaluating model safety. Paid platforms like Lakera, Knostic, Robust Intelligence, Noma, and Aim are pushing the edge on real-time, content-aware security for AI, each offering slightly different tradeoffs in how they offer protection.


The Impact of Quantum Decryption

There are two key quantum mechanical phenomena, superposition and entanglement, that enable qubits to operate fundamentally differently from classical bits. Superposition allows a qubit to exist in a probabilistic combination of both 0 and 1 states simultaneously, significantly increasing the amount of information a small number of qubits can hold. ... Quantum decryption of data stolen using current standards could have pervasive impacts. Government secrets, other long-lived data, and intellectual property remain at significant risk even if decrypted years after a breach. Decrypted government communications, documents, or military strategies could compromise national security. An organization’s competitive advantage could be undermined by trade secrets being exposed. Meanwhile, the value of data such as credit card information will diminish over time due to expiration dates and the issuance of new cards. ... For organizations, the ability of quantum computers to decrypt previously stolen data could result in substantial financial losses due to data breaches, corporate espionage, and potential legal liabilities. The exposure of sensitive corporate information, such as trade secrets and strategic plans, could provide competitors with an unfair advantage, leading to significant financial harm.
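
As a brief formalization of the superposition point (standard notation, not drawn from the article): a single qubit holds two complex amplitudes, and an n-qubit register is described by 2^n of them, which is the sense in which a few qubits "hold" far more information than the same number of classical bits.

```latex
\[
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
\[
\lvert \Psi_n \rangle = \sum_{x \in \{0,1\}^{n}} c_x \lvert x \rangle,
\qquad \sum_{x} \lvert c_x \rvert^{2} = 1
\]
```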


Don't let a crisis of confidence derail your data strategy

In an age of AI, the options that range from on-premise facilities to colocation, or public, private and hybrid clouds, are business-critical decisions. These decisions are so important because such choices impact the compliance, cost efficiency, scalability, security, and agility that can make or break a business. In the face of such high stakes, it is hardly surprising that confidence is the battleground on which deals for digital infrastructure are fought. ... Commercially, Total Cost of Ownership (TCO) has become another key factor. Public cloud was heavily promoted on the basis of lower upfront costs. However, businesses have seen the "pay-as-you-go" model lead to escalating operational expenses. In contrast, businesses have seen the cost of colocation and private cloud become more predictable and attractive for long-term investment. Some reports suggest that at scale, colocation can offer significant savings over public cloud, while private cloud can also reduce costs by eliminating hardware procurement and management. Another shift in confidence has been that public cloud no longer guarantees the easiest path to growth. Public cloud has traditionally excelled in rapid, on-demand scalability. This agility was a key driver for adoption, as businesses sought to expand quickly.


The Anti-Metrics Era of Developer Productivity

The need to measure everything truly spiked during COVID when we started working remotely, and there wasn’t a good way to understand how work was done. Part of this also stemmed from management’s insecurities about understanding what’s going on in software engineering. However, when surveyed about the usefulness of developer productivity metrics, most leaders admit that the metrics they track are not representative of developer productivity and tend to conflate productivity with experience. And now that most of the code is written by AI, measuring productivity the same way makes even less sense. If AI improves programming effort by 30%, does that mean we get 30% more productivity? ... Whether you call it DevEx or platform engineering, the lack of friction equals happy developers, which equals productive developers. In the same survey, 63% of developers said developer experience is important for their job satisfaction. ... Instead of building shiny dashboards, engineering leads should focus on developer experience and automated workflows across the entire software development life cycle: development, code reviews, builds, tests and deployments. This means focusing on solving real developer problems instead of just pointing at the problems.


Why banks’ tech-first approach leaves governance gaps

Integration begins with governance. When cybersecurity is properly embedded in enterprise-wide governance and risk management, security leaders are naturally included in key forums, including strategy discussions, product development, and M&A decision making. Once at the table, the cybersecurity team must engage productively. They must identify risks, communicate them in business terms AND collaborate with the business to develop solutions that enable business goals while operating within defined risk appetites. The goal is to make the business successful, in a safe and secure manner. Cyber teams that focus solely on highlighting problems risk being sidelined. Leaders must ensure their teams are structured and resourced to support business goals, with appropriate roles and encouragement of creative risk mitigation approaches. ... Start by ensuring there is a regulatory management function that actively tracks and analyzes emerging requirements. These updates should be integrated into the enterprise risk management (ERM) framework and governance processes—not handled in isolation. They should be treated no differently than any other new business initiatives. ... Ultimately, aligning cyber governance with regulatory change requires cross-functional collaboration, early engagement, and integration into strategic risk processes, not just technical or compliance checklists.

Daily Tech Digest - February 06, 2024

Championing privacy-first security: Harmonizing privacy and security compliance

When security solutions are crafted with privacy as a central consideration, organisations can deploy robust security measures while safeguarding the personal data of their customers and employees. A comprehensive cost-benefit analysis reveals significant advantages in adopting a privacy-first approach to security. For instance, proactively blocking malware before it infiltrates an organisation’s systems can avert a potential data breach. Given the average cost of US$4.45 million in 2023, coupled with the consequential impact on brand reputation and legal ramifications, preventing even a single data breach becomes paramount for any company. Hence, the importance of industry-leading security measures is indisputable. Any reputable security company should provide solutions that limit its access to sensitive data and ensure the protection of the personal data entrusted to its care. ... A privacy-first security program assesses the risks associated with both implementing and not implementing security measures. If the advantages of deploying a security solution, such as email scanning, outweigh the drawbacks – which is highly probable – the organisation should proceed with the careful implementation of this capability.


Far Memory Unleashed: What Is Far Memory?

Far memory is a memory tier between DRAM and Flash that has a lower cost per GB than DRAM and higher performance than Flash. Far memory works by disaggregating memory and allowing nodes or machines to access the memory of a remote node/machine via Compute Express Link (CXL). Memory is the most contested and least elastic resource in a data center. Currently, servers can only use local memory, which may be scarce on the local system but abundant on other underutilized servers. With far memory, local machines can use a remote machine’s memory. By introducing far memory into the memory hierarchy and moving less frequently accessed data to far memory, the system can perform efficiently with less DRAM and reduce the total cost of ownership. Far memory uses a remote machine’s memory as a swap device, either by using idle machines or by building memory appliances that only serve to provide a pool of memory shared by many servers. This approach optimizes memory usage and reduces over-provisioning. However, far memory also has its own challenges. Swapping out memory pages to remote machines increases the failure domain of each machine, which can lead to a catastrophic failure of the entire cluster.
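
A minimal Go sketch of the tiering idea, assuming a simple last-access policy for demoting cold pages from local DRAM to a far-memory pool. The data structures are stand-ins for illustration only, not a real allocator or CXL interface.

```go
package main

import (
	"fmt"
	"time"
)

// Page tracks when data was last touched so the tiering policy can decide
// whether it belongs in local DRAM or in the far-memory pool.
type Page struct {
	Key        string
	LastAccess time.Time
}

// Two tiers: local DRAM (fast, scarce) and a far-memory pool on a remote
// node (slower, abundant). Maps stand in for the real allocators here.
var (
	local = map[string]Page{}
	far   = map[string]Page{}
)

// demoteCold moves pages that have not been touched within the threshold
// out of DRAM and into far memory, freeing local capacity.
func demoteCold(threshold time.Duration, now time.Time) {
	for key, p := range local {
		if now.Sub(p.LastAccess) > threshold {
			far[key] = p
			delete(local, key)
			fmt.Printf("demoted %s to far memory\n", key)
		}
	}
}

func main() {
	now := time.Now()
	local["hot-index"] = Page{Key: "hot-index", LastAccess: now}
	local["old-report"] = Page{Key: "old-report", LastAccess: now.Add(-2 * time.Hour)}
	demoteCold(30*time.Minute, now)
	fmt.Printf("local pages: %d, far pages: %d\n", len(local), len(far))
}
```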


Four reasons your agency's security infrastructure isn't agile enough

There are four key considerations for integrating security architecture effectively in an agile environment. Cross-Functional Collaboration: Security experts must actively engage with developers, testers, and product owners. Collaborating with experts helps create a shared understanding of security requirements and facilitates quick resolution of security-related issues. Embedding security professionals within Agile teams can enhance real-time collaboration and ensure consistent security controls. Security Training and Awareness: Given the rapid pace of an Agile sprint, all team members should be equipped with the knowledge to write secure code. ... Foster a Security Culture: Foster a culture where security is seen as everyone's responsibility, not just the security team's. Adapt the organizational mindset to value security equally with other business objectives. ... Security Champions within Agile Teams: Identify and nurture 'Security Champions' within each Agile team. These individuals with a keen interest in security act as a bridge between the security team and their respective Agile teams. They help promote security best practices, ensuring security is not overlooked amidst other technical considerations.


AI Legislation: Enterprise Architecture Guide to Compliance

Artificial intelligence (AI) tools are so easy to leverage that they can be used by anyone within your organization without technical support. This means that you need to keep a careful eye not just on the authorized applications you leverage, but also on what AI tools your colleagues could be using without authorization. In leveraging AI tools to generate content for your organization, your employees could unwittingly input private data into the public instance of ChatGPT. Not only does this share that data with ChatGPT's vendor, OpenAI, but it actually trains ChatGPT on that content, meaning the AI tool could potentially output that information to another user outside of your organization. Alternatively, overuse of generative AI tools without proper supervision could lead to factual or textual errors being published to your customers. Gen AI tools need careful supervision to ensure they don't "hallucinate" or produce mistakes, as they are unable to self-edit. It's equally important to be able to report back to legislators on what AI is being used across your company, so they can see you're compliant. This will likely become a regulatory requirement in the near future.


Choosing a disaster recovery site

The first option is to set up your own secondary DR data center in a different location from your primary site. Many large enterprises go this route; they build out DR infrastructure that mirrors what they have in production so that, at least in theory, it can take over instantly. The appeal here lies in control. Since you own and operate the hardware, you dictate compatibility, capacity, security controls and every other aspect. You’re not relying on any third party. The downside, of course, lies in cost. All of that redundant infrastructure sitting idle doesn’t come cheap. ... The second approach is to engage an external DR service provider to furnish and manage a recovery site on your behalf. Companies like SunGard built their business around this model. The appeal lies in offloading responsibility. Rather than build out your own infrastructure, you essentially reserve DR data center capacity with the provider. ... The third option for housing your DR infrastructure is leveraging the public cloud. Market leaders like AWS and Azure offer seemingly limitless capacity that can scale to meet even huge demands when disaster strikes.


How CISOs navigate policies and access across enterprises

Simply speaking, if existing network controls are now being moved to the cloud, the scope of technical controls does not drastically differ from legacy approaches. The technology, however, has massively evolved towards platform-centric controls, and for good reason. Isolated controls cause complexity, and if you are moving your perimeter to a hyperscaler, both your users and their devices will no longer be managed by the corporate on-prem security controls either. A good CASB to broker between user and data is key, as is identity and access management. What’s now new is workload protection requirements à la CSAP technology. In addition to the increasing sophistication and number of security threats and successful breaches, most enterprises further increase risk through “rogue IT” teams leveraging cloud environments without the awareness or management of security teams. Cloud deployments typically happen faster and with less planning and oversight than data center or on-site environment deployments. Cloud security tools should be an extension of your other premise-based tools for ease of management, consistency of policy enforcement and cost savings due to additional purchase commitments, training, and certification non-duplicity.


What to Know About Machine Customers

In the realm of customer service and support, machine customers are like virtual assistants or smart devices (think of Siri or Alexa) that carry out customer service tasks on behalf of actual human customers. Alok Kulkarni, CEO and Co-founder of Cyara, says the emergence of machine customers introduces a new dynamic, requiring organizations to adapt their existing support strategies. “This might include developing specific interfaces and communication channels tailored for interactions with machine customers,” he explains in an email interview. Organizations must create additional self-service options specifically designed for machine customers. “Unlike traditional customer support approaches, catering to machine customers requires a nuanced understanding of their specific needs and operational dynamics,” Kulkarni explains. This means designing self-service interfaces that are not only user-friendly for machines but also align with the intricacies of autonomous negotiation and purchasing processes. These interfaces should empower machine customers to navigate through various stages of transactions autonomously, from product selection to payment processing, ensuring a streamlined and frictionless experience.


Google: Govs Drive Sharp Growth of Commercial Spyware Cos

Much of the concern has to do with the explosion in the availability of tools and services that allow governments and law enforcement to break into target devices with impunity, harvest information from them, and spy unchecked on victims. The vendors selling these tools — most of which are designed for mobile devices — have often openly pitched their wares as legitimate tools that aid in law enforcement and counter-terrorism efforts. But the reality is that repressive governments have routinely used spyware tools against journalists, activists, dissidents, and opposition party politicians, said Google. The company's report cites three instances of such misuse: one that targeted a human rights defender working with a Mexico-based rights organization; another against an exiled Russian journalist; and the third against the co-founder and director of a Salvadorian investigative news outlet. The researcher attributes much of the recent growth in the CSV market to strong demand from governments around the world to outsource their need for spyware tools rather than have an advanced persistent threat build them in-house. 


How To Build Autonomous Agents – The End-Goal for Generative AI

From a technology perspective, there are five elements that go into autonomous agent designs: the agent itself, for processing; tools, for interaction; prompt recipes, for prompting and planning; memory and context, for training and storing data; and APIs / user interfaces, for interaction. The agent at the center of this infrastructure leverages one or more LLMs and the integrations with other services. You can build this integration framework yourself, or you can bring in one of the existing orchestration frameworks that have been created, such as LangChain or LlamaIndex. The framework should provide the low-level foundational model APIs that your service will support. It connects your agent to the resources that you will use as part of your overall agent, including everything from existing databases and external APIs, to other elements over time. It also has to take into account what use cases you intend to deliver with your agent, from chatbots to more complex autonomous tasks. Existing orchestration frameworks can take care of a lot of the heavy lifting involved in managing LLMs, which makes it much easier and faster to build applications or services that use GenAI.
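
To show how those five elements might fit together, here is a minimal Go sketch of an agent loop. The LLM stub, tool dispatch, and prompt template are deliberately naive assumptions for illustration; a real build would lean on an orchestration framework such as LangChain or LlamaIndex rather than this hand-rolled loop.

```go
package main

import (
	"fmt"
	"strings"
)

// LLM abstracts whichever foundation model the agent is built on.
type LLM interface {
	Complete(prompt string) string
}

// Tool is something the agent can call to act on the outside world
// (a database query, an external API, and so on).
type Tool struct {
	Name string
	Run  func(input string) string
}

// Agent wires together the five elements named above: the model, tools,
// a prompt recipe, memory, and an entry point (Handle) acting as the API.
type Agent struct {
	Model        LLM
	Tools        map[string]Tool
	PromptRecipe string   // template used for prompting and planning
	Memory       []string // running context from earlier turns
}

func (a *Agent) Handle(task string) string {
	prompt := fmt.Sprintf(a.PromptRecipe, strings.Join(a.Memory, "\n"), task)
	plan := a.Model.Complete(prompt)
	// Naive dispatch: if the plan names a tool, call it. Orchestration
	// frameworks handle this loop far more robustly than this sketch.
	for name, tool := range a.Tools {
		if strings.Contains(plan, name) {
			result := tool.Run(task)
			a.Memory = append(a.Memory, "used "+name+": "+result)
			return result
		}
	}
	a.Memory = append(a.Memory, plan)
	return plan
}

// fakeLLM is a stand-in so the sketch runs without any model API.
type fakeLLM struct{}

func (fakeLLM) Complete(prompt string) string { return "call weather_lookup" }

func main() {
	agent := &Agent{
		Model: fakeLLM{},
		Tools: map[string]Tool{
			"weather_lookup": {Name: "weather_lookup", Run: func(in string) string { return "sunny, 22C" }},
		},
		PromptRecipe: "Context:\n%s\nTask: %s\nDecide which tool to call.",
	}
	fmt.Println(agent.Handle("what's the weather in Lisbon?"))
}
```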


How Platform and Site Reliability Engineering Are Evolving DevOps

Actually, failure should not just be OK but welcome. Most organizations are averse to failure, but it’s only through our failures in these spaces that we can learn and grow and figure out how to best position, leverage, and continue to imagine the roles of DevOps, platform engineers, and SRE. I’ve seen this play out in large companies that went all in on DevOps and then realized that they needed a team focused on breaking down any barriers that presented themselves to developers. At scale, DevOps - even with the tools provided by the internally focused platform engineering team - didn’t really cut it. These companies then integrated the SRE function, which filled DevOps’ reliability and scalability gaps. That worked until these companies realized that they were reinventing the wheel - dozens of times. Different engineering teams within the organization were doing things just differently enough - different setups, different processes, different expectations - that they needed separate setups to put out a service. The SREs were seeing all of this after the fact, which led them to circle back to the realization that different teams needed to be using the same development building blocks. Frustrating? Yes. The cost of increasing efficiency in the future? Absolutely.



Quote for the day:

“It’s better to look ahead and prepare, than to look back and regret.” -- Jackie Joyner-Kersee

Daily Tech Digest - November 26, 2023

European Commission Failing to Tackle Spyware, Lawmakers Say

As that deadline looms, lawmakers accused the European Commission of failing to act. On Thursday, they passed a resolution that attempts to force the European Commission to present the legislative changes recommended in May by the PEGA Committee. At a plenary session in Strasbourg, EU lawmakers said that the European Commission's inaction had facilitated an uptick in recent spyware cases. Such cases have included the alleged targeting of exiled Russian journalist Galina Timchenko using Pegasus when she was based in Germany, as well as the Greek government's attempt to thwart investigations into spyware abuse by its ministers. In contrast to the EU approach, lawmakers highlighted the U.S. government's blacklisting in July of European spyware firms Intellexa and Cytrox and the Biden administration's citing of the companies' risk to U.S. national security and foreign policy. Speaking at the Thursday plenary, EU Justice Commissioner Didier Reynders condemned using spyware to illegally intercept personal communications, adding that member states cannot use "national security" as a legal basis to circumvent existing laws and indiscriminately target their citizens.


Mastering the art of differentiation: Vital competencies for thriving in the age of artificial intelligence

With AI designed to make decisions using algorithms grounded in data and patterns, these algorithms are only as dependable as the data they are trained on and can be influenced by the assumptions and biases of their creators. Consequently, it is imperative to employ critical thinking skills to assess AI decisions and guarantee that they align with our values and objectives. Moreover, critical thinking is essential for resolving complex issues that may exceed AI’s capabilities. Developing critical thinking skills involves cultivating the ability to analyze, evaluate, and synthesize information to make informed decisions. ... In this rapidly evolving modern landscape, heavily influenced by digital technologies, cultivating a high LQ is indispensable for the long-term success and sustainability of both employees and organizations. In the business world, change is constant, making continuous learning and development essential at every level of the organization to ensure we consistently make the right decisions. High LQ empowers employees to foster innovation and creativity, cultivate resilience, and position themselves more effectively to future-proof their careers. 


Digital advocacy group criticizes current scope of the EU AI Act

The group’s core argument is that the AI Act now goes beyond its originally intended scope, and should instead remain focused on high-risk use cases rather than being directed at specific technologies. Digital Europe also warned that the financial burden the Act could place on companies wanting to bring AI-enabled products to market could make operating out of the EU unsustainable for smaller organizations. “For Europe to become a global digital powerhouse, we need companies that can lead on AI innovation also using foundation models and GPAI (general-purpose AI),” the statement read. “As European digital industry representatives, we see a huge opportunity in foundation models, and new innovative players emerging in this space, many of them born here in Europe. Let’s not regulate them out of existence before they get a chance to scale, or force them to leave.” The letter was signed by 32 members of Digital Europe and outlined four recommendations that signatories believe would allow the Act to strike the necessary balance between regulation and innovation.


HR Leaders unleashing retention success through employee well-being

“The pandemic brought the discourse on mental health to the forefront and normalised talk about stress and mental health in all forums. Accordingly, a formalised framework to address the mental health of employees has been put in place. Wellness webinars on these topics are delivered through tie-ups with service providers and in-house subject matter experts. Webinars on mental health are regularly organised with an aim to destigmatise mental health through increasing awareness on topics such as mental health awareness, digital & screen detox, and stress management, etc. We continuously work on instituting policies that are customised as per the individual and life-stage needs of the employees. An employee assistance program, in tie-up with a service provider, is in place to facilitate mental health conversations with qualified professionals. In addition, the employees are nudged to incorporate habits that help take care of their mental well-being as an unconscious part of their lives. Initiatives such as the 'Mental Health Bingo’ card and ‘I De-stress myself by __’ campaigns have been launched.


How generative AI changes the data journey

We see generative AI used in the observability space throughout many industries, especially regarding compliance. Let’s look at healthcare, an industry where you must comply with HIPAA. You are dealing with sensitive information, generating tons of data from multiple servers, and you must annotate the data with compliance tags. An IT team might see a tag that says, “X is impacting 10.5.34 from GDPR…” The IT team may not even know what 10.5.34 means. This is a knowledge gap—something that can very quickly be filled by having generative AI right there to quickly tell you, “X event happened, and the GDPR compliance that you’re trying to meet by detecting this event is Y…” Now, the previously unknown data has turned into something that is human-readable. Another use case is transportation. Imagine you’re running an application that’s gathering information about flights coming into an airport. A machine-generated view of that will include flight codes and airport codes. Now let’s say you want to understand what a flight code means or what an airport code means. Traditionally, you would use a search engine to inquire about specific flight or airport codes.


Banks May Be Ready for Digital Innovation: Many of the Staff Aren’t

A major roadblock to training workers is that many don’t actually bank with their employer. This makes training critical, especially for frontline staff members, says John Findlay, chief executive and founder of digital learning company LemonadeLXP, based in Ontario, Canada. “If their staff doesn’t bank with them, they don’t use the technologies on offer and it’s pretty difficult for them to promote them to customers,” he says. It’s also difficult for them to answer customer questions. Brian McNutt, U.S. vice president of product management at Dutch engagement platform Backbase, says banks should incentivize their staff to actually use their services as much as possible. One approach is to offer special rates or deals to employees, he says. “I think that really the most important thing is that they are customers themselves. There’s really no replacement for that. For somebody to really be able to empathize or understand customers, they have to experience the products themselves.”


The Future of Software Engineering: Transformation With Generative AI

The application of Generative AI in software engineering is not just a technical enhancement but a fundamental change in how software is conceptualized, developed, and maintained. This section delves into the key themes that underline this transformative integration, elucidating the diverse ways in which Generative AI is reshaping the field. Generative AI is revolutionizing the way code is written and maintained. AI models can now understand programming queries in natural language and translate them into efficient code, significantly reducing the time and effort required from human developers. This has several implications. Enhanced productivity: developers can focus on complex problem-solving rather than spending time on routine coding tasks. Learning and development: AI models can suggest best coding practices and offer real-time guidance, acting as a learning tool for novice programmers. Code quality improvement: with AI's ability to analyze vast codebases, it can recommend optimizations and improvements, leading to higher-quality and more maintainable code.


Reports: China’s Alibaba Shuts Down Quantum Lab

DoNews reported this week that Alibaba’s DAMO Academy (the Academy for Discovery, Adventure, Momentum and Outlook) has closed down its quantum laboratory for budget and profitability reasons. According to the outlet’s internal sources, the budget ax meant that more than 30 people, possibly among China’s brightest quantum researchers, lost their positions. As further evidence, DoNews reports that the official website of DAMO Academy has also removed the introduction page for the quantum laboratory. According to the story, translated into English: “Insiders claimed that Alibaba’s DAMO Academy Quantum Laboratory had undergone significant layoffs, but it was not clear at that time whether the entire quantum computing team had been disbanded.” Media reports further suggest that many of the DAMO Academy quantum team members who were laid off have begun to send their resumes to other companies. According to The Quantum Insider’s China’s Quantum Computing Market brief, Alibaba is a diverse tech conglomerate that has been active in quantum since 2015, and the company’s Quantum Lab Academy has been teaching employees and students about the prospects of quantum computing.


It’s time the industry opts for collaborative manufacturing

The transition from an analogue factory to a digital one underscores the necessity of a coherent and efficient digital infrastructure. This transformation extends beyond the primary tasks of manufacturing, adding efficiency at every stage, including the cutting room. Investments in IoT-enabled machinery, though costly, can lead to significant improvements in output and efficiency. ... The technology underlines the importance of integrated planning software, which aids in production planning, order flow management and the efficient consumption of raw materials. As technology continues to evolve and digitisation gains ground, an important question emerges while making the roadmap: What are the social implications of this technological revolution? In a city like Bengaluru and its surrounding manufacturing hubs, more than 3.5 million women toil in the garment industry, forming the majority of the workforce. Their livelihoods hinge on operating sewing machines, a vocation they might continue for the next two decades.


The Digital Revolution in Banking: Exploring the Future of Finance

As banks continue to close their physical branches, it becomes crucial to balance the convenience of digital banking and the personalized service that customers crave. While online banking has become increasingly popular, some still prefer the in-person experience of visiting their local branch and interacting with staff. This is especially important when it comes to welcoming new customers. To address this, emerging technologies, such as augmented reality (AR) and virtual reality (VR), may offer a solution to bridge the gap between digital convenience and personalized service. Imagine you are a banking executive looking for ways to improve your customer experience. You know that digital banking is the future, but you also understand that some customers still crave the personalized service of visiting a physical branch. This is where augmented reality (AR) and virtual reality (VR) come in. By incorporating AR into your mobile app, you can enhance the interface and provide customers with more information in an immersive way. 



Quote for the day:

"Success is the sum of small efforts, repeated day-in and day-out." -- Robert Collier

Daily Tech Digest - August 14, 2022

Identity crisis: Artificial intelligence and the flawed logic of ‘mind uploading’

We can think of the copy as a digital clone or twin, but it would not be you. It would be a mental copy of you, including all of your memories up to the moment your brain was scanned. But from that time on, the copy would generate its own memories inside whatever simulated world it was installed in. It might interact with other simulated people, learning new things and having new experiences. Or maybe it would interact with the physical world through robotic interfaces. At the same time, the biological you would be generating new memories and skills and knowledge. In other words, your biological mind and your digital copy would immediately begin to diverge. They would be identical for one instant and then grow apart. Your skills and abilities would diverge. Your knowledge and understanding would diverge. Your personality and objectives would diverge. After a few years, there would be significant differences. And yet, both versions would “feel like the real you.” This is a critical point – the copy would have the same feelings of individuality that you have. 


It’s Time to Normalize Cyberattack Data

The hope is that as an open standard, it will be adopted and used with existing security standards and processes. Then, as developers and users incorporate OCSF into their products and processes, security data normalization will become simpler and less burdensome. This, in turn, will enable security teams to do better at analyzing attack data, identifying threats, and defending their organizations from cyberattacks. Ultimately, John Graham-Cumming, Cloudflare’s CTO, said in a statement, “Every business deserves a simple, straightforward way to analyze and understand the security landscape — and that starts with their data. By participating in the OCSF, we hope to help the entire security industry focus on doing the work that matters instead of wasting countless hours and resources on formatting data.” I hope this is true. I hate wasting time. And time is one thing we never have enough of when we’re dealing with a security problem. If OCSF can succeed in its aims, it will be a major step forward in dealing with large-scale security problems.
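
As a rough sketch of what normalization buys, the Go example below maps two hypothetical vendor-specific event shapes into one common structure so a single query can correlate them. The field names are simplified illustrations, not the actual OCSF schema.

```go
package main

import (
	"fmt"
	"time"
)

// NormalizedEvent is a simplified common shape; the real OCSF schema defines
// far richer event classes and attributes. Field names here are illustrative.
type NormalizedEvent struct {
	Time     time.Time
	Category string
	Severity int
	SourceIP string
	Product  string
}

// Two hypothetical vendor-specific formats that would otherwise be analyzed separately.
type vendorAFirewallLog struct {
	TS  int64  // unix seconds
	Sev string // "high", "medium", "low"
	Src string
}

type vendorBAuthLog struct {
	When     string // RFC3339
	Risk     int    // 1..10
	ClientIP string
}

func fromVendorA(e vendorAFirewallLog) NormalizedEvent {
	sev := map[string]int{"low": 1, "medium": 2, "high": 3}[e.Sev]
	return NormalizedEvent{Time: time.Unix(e.TS, 0), Category: "network_activity", Severity: sev, SourceIP: e.Src, Product: "vendor-a-firewall"}
}

func fromVendorB(e vendorBAuthLog) NormalizedEvent {
	t, _ := time.Parse(time.RFC3339, e.When)
	return NormalizedEvent{Time: t, Category: "authentication", Severity: (e.Risk + 2) / 3, SourceIP: e.ClientIP, Product: "vendor-b-idp"}
}

func main() {
	events := []NormalizedEvent{
		fromVendorA(vendorAFirewallLog{TS: 1660400000, Sev: "high", Src: "203.0.113.7"}),
		fromVendorB(vendorBAuthLog{When: "2022-08-13T12:00:00Z", Risk: 9, ClientIP: "203.0.113.7"}),
	}
	// Once normalized, one pass can correlate events across products.
	for _, e := range events {
		fmt.Printf("%s sev=%d src=%s from %s\n", e.Category, e.Severity, e.SourceIP, e.Product)
	}
}
```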


3 Expert-Backed Strategies for Boosting Your Entrepreneurial Energy

Entrepreneurs are a special breed of overthinkers. We're constantly making decisions, so we have to think fast on our feet. But we also must take the time to weigh our options properly. And so we think up all possible scenarios: the good, the bad and the ugly. This used to be one of my biggest hurdles when starting. What if this client falls through? What if users aren't satisfied with our product? What if we can't attract enough attention and be sustainable? What will I do? My mind was my biggest enemy. Consequently, after a long night of tossing and turning, I'd wake up unmotivated to start the day. Here's the thing I've learned since: energy thrives on confidence. And confidence only comes when you believe in your abilities. As co-authors Linda Bloom, L.C.S.W., and Charlie Bloom, M.S.W., write in Psychology Today, "Self-trust is not trusting yourself to know all the answers, nor is it believing that you will always do the right things," they explain. "It's having the conviction that you will be kind and respectful to yourself regardless of the outcome of your efforts."


4 Flaws, Other Weaknesses Undermine Cisco ASA Firewalls

"If you have access to the virtual machine, you have full access inside the network, but more importantly, you can sniff all the traffic going through, including decrypted VPN traffic," Baines says. "So, it is a really great place for an attacker to chill out and pivot, but probably just sniff for credentials or monitor the traffic flowing into the network." Baines discovered the issue when he was investigating the Cisco ASDM to get "a level set on how the GUI (graphical user interface) works" and pull apart the protocol, he says. A component installed on administrators' systems, known as the ASDM launcher, could be used by attackers to deliver malicious code in Java class files or through the ASDM Web portal. As a result, attackers could create a malicious ASDM package to compromise the administrator's system through installers, malicious Web pages, and malicious Java components. The ASDM vulnerabilities discovered by Rapid7 include a known vulnerability (CVE-2021-1585) that allows an unauthenticated remote code execution (RCE) attack, which Cisco claimed was patched in a recent update, but Baines discovered it remained.


A Shift in Computer Vision Is Coming

Is computer vision about to reinvent itself, again? Ryad Benosman, professor of ophthalmology at the University of Pittsburgh and an adjunct professor at the CMU Robotics Institute, believes that it is. As one of the founding fathers of event-based vision technologies, Benosman expects that neuromorphic vision — computer vision based on event-based cameras — will be the next direction computer vision will take. “Computer vision has been reinvented many, many times,” Benosman said. “I’ve seen it reinvented twice at least, from scratch, from zero.” Benosman cited the shift in the 1990s from image processing with a bit of photogrammetry to a geometry-based approach and then to today’s rapid advance toward machine learning. Despite those changes, modern computer-vision technologies are still predominantly based on image sensors — cameras that produce an image similar to what the human eye sees. According to Benosman, until the image-sensing paradigm is no longer useful, it holds back innovation in alternative technologies. The development of high-performance processors, such as GPUs, delays the need to look for alternative solutions and thus has prolonged this effect.


What’s the Go programming language really good for?

Go has been compared to scripting languages like Python in its ability to satisfy many common programming needs. Some of this functionality is built into the language itself, such as “goroutines” for concurrency and threadlike behavior, while additional capabilities are available in Go standard library packages, like Go’s http package. Like Python, Go provides automatic memory management capabilities including garbage collection. Unlike scripting languages such as Python, Go code compiles to a fast-running native binary. And unlike C or C++, Go compiles extremely fast—fast enough to make working with Go feel more like working with a scripting language than a compiled language. Further, the Go build system is less complex than those of other compiled languages. It takes few steps and little bookkeeping to build and run a Go project. ... Go binaries run more slowly than their C counterparts, but the difference in speed is negligible for most applications. Go performance is as good as C for the vast majority of work, and generally much faster than other languages known for speed of development.


Ex-CIA security boss predicts coming crackdown on spyware

Protecting individuals' privacy is something all of us — including elected officials — should be very concerned about, Mestrovich said. "I would expect, going forward, there will be either executive orders or legislation passed to ensure that the civil liberties and the rights that we all expect to data privacy and privacy of our own activities are kept sacrosanct," he added. For Mestrovich, a CISO himself, ransomware is top of mind. "Ransomware is a huge threat to just our economic viability," Mestrovich told us, citing a Cybersecurity Ventures forecast that global cybercrime costs will grow by 15 percent per year over the next five years, reaching $10.5 trillion annually by 2025. "Clearly, the cyber criminals have monetized the theft of data or depriving an organization use of its data," Mestrovich said. "Until we can do something to prevent the economic gain that they have from the theft of data or the denial of an organization's access to its data, this is only going to increase."


Urgent security warning issued as hackers shift ransomware attacks to small businesses

The Director of the NCSC, Richard Browne, said that in the past these groups typically focussed on larger organisations. However, they have now shifted focus to smaller entities. “We have been dealing with the threat of ransomware for some time; however, we have seen a noticeable change in the tactics of criminal ransomware groups, whereby rather than largely focussing on Governments, critical infrastructure and big business, they are increasingly targeting smaller businesses. “This is a trend that has been observed globally, and Ireland is no exception with several businesses becoming victims of these groups in the past number of weeks,” he said. Richard Browne said the letter sent to IBEC by the NCSC and GNCCB outlines guidance for small companies on how they can deal with such attacks. “Whilst we appreciate that many business owners are understandably nervous of the threat ransomware poses, there are some straightforward security measures that can be put in place to ensure that an organisation's data and systems remain secure,” he added.


Computer Vision and Deep Learning for Agriculture

AI applications can analyze weather and soil conditions, water usage, and risk of diseases to help farmers reduce the risk of crop failures by providing valuable insights like the right time to sow seeds and the right crop or seed choices. Detecting plant diseases, weeds, and pests beforehand can reduce the use of chemicals like herbicides and pesticides and bring cost savings. Many companies have started using robots that can eliminate 80% of the volume of the substances generally sprayed on the crops and bring down the expenditure on herbicides by 90%. Further, the use of AI in harvesting, picking, and vacuum apparatus can quickly identify the location of the harvestable produce and help select the right fruits; strawberry harvesting is a classic example. ... With satellite imagery and weather data, AI applications can analyze the market trends, like which crops are in demand and which are more profitable. This helps the farmers to increase their revenue by guiding them about future price patterns, demand level, type of crop to sow for maximum benefit, pesticide usage, etc.


Rethinking Web Application Firewalls

Vulnerabilities are now so numerous, and cloud native applications have such large attack surfaces, that there is no way to mitigate them using traditional means, Tiperneni explained. “It’s no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tiperneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they’re looking for some level of prioritization in terms of where to start.” And the onus is on the user to mitigate the problem, Tiperneni said. Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part is how to manage the attack surface. In this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tiperneni said.
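
The article stops short of prescribing a formula, but the prioritization Tiperneni describes can be illustrated with a toy scoring sketch: weight severity by exposure and by a rough blast-radius proxy. The field names, weights, and findings below are assumptions for illustration, not Tiperneni's method or any vendor's scoring model.

```python
# Toy prioritization sketch: rank findings by severity, weighted by exposure and by
# how many other services the affected workload can reach (a crude blast-radius proxy).
findings = [
    {"id": "vuln-api-gateway", "severity": 9.8, "internet_exposed": True,  "reachable_services": 12},
    {"id": "vuln-batch-job",   "severity": 7.5, "internet_exposed": False, "reachable_services": 2},
    {"id": "vuln-reporting",   "severity": 9.1, "internet_exposed": False, "reachable_services": 1},
]

def priority(finding: dict) -> float:
    exposure = 2.0 if finding["internet_exposed"] else 1.0    # reachable from outside?
    blast_radius = 1.0 + finding["reachable_services"] / 10.0  # how far a compromise could spread
    return finding["severity"] * exposure * blast_radius

for finding in sorted(findings, key=priority, reverse=True):
    print(finding["id"], round(priority(finding), 1))
```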



Quote for the day:

"The Leadership Seduction of storytelling invites self-pity, exaggerates one's importance, and encourages inaction." -- Catherine Robinson-Walker

Daily Tech Digest - January 30, 2022

Machine learning is going real-time: Here's why and how

ML systems need to have two components to be able to do that, Huyen notes. They need fast inference, i.e. models that can make predictions in the order of milliseconds. And they also need real-time pipelines, i.e. pipelines that can process data, input it into models, and return a prediction in real-time. To achieve faster inference, Huyen goes on to add, models can be made faster, they can be made smaller, or hardware can be made faster. The focus on inference, TinyML, and AI chips that we've been covering in this column is perfectly aligned with this, and naturally, these approaches are not mutually exclusive either. Huyen also embarked on an analysis of streaming fundamentals and frameworks, something that has also seen wide coverage in this column from early on. Many companies are switching from batch processing to stream processing, from request-driven architecture to event-driven architecture, and this is tied to the popularity of frameworks such as Apache Kafka and Apache Flink. This change is still slow in the US but much faster in China, Huyen notes.
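
As a rough illustration of the event-driven pattern Huyen describes, the sketch below scores each feature event as it arrives from a Kafka topic instead of waiting for a batch job. The topic name, broker address, and threshold model are placeholders, and kafka-python is just one of several client libraries that could sit in this spot.

```python
import json

from kafka import KafkaConsumer  # kafka-python client; other Kafka clients work the same way

class ThresholdModel:
    """Placeholder model: any object exposing predict() could be swapped in."""
    def predict(self, features: dict) -> int:
        return int(features.get("amount", 0) > 1000)

model = ThresholdModel()

# Subscribe to a stream of feature events rather than reading a nightly batch extract.
consumer = KafkaConsumer(
    "transaction-features",              # assumed topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_deserializer=lambda raw: json.loads(raw),
)

for message in consumer:
    features = message.value
    prediction = model.predict(features)
    # A real pipeline would publish this to another topic or write it to a feature store.
    print(features, "->", prediction)
```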


Whistleblowers can protect crypto and DeFi

While the industry frets over this counterrevolution of sorts, crypto insiders who report fraud and illegal activity to the government could see significant upside. Regulators, such as the SEC, the CFTC, the Financial Crimes Enforcement Network, and the Internal Revenue Service, need whistleblowers who can provide an inside look at the operations of a company or industry segment, helping regulators identify fraud and illegal activities well before wrongdoers irreparably injure investors, customers and the public. Information from insiders can also help regulators target their enforcement actions and rulemaking to address the worst actors in the space, which can help prevent regulators from unnecessarily quashing innovative and valuable aspects of the cryptocurrency industry. In exchange for this information, whistleblowers can earn awards under various federal whistleblower rewards programs, provided the whistleblower properly filed a tip that contributed to a qualifying enforcement action. In the case of the SEC and CFTC programs, and now the newly enhanced AML whistleblower program, a whistleblower can receive an award of up to 30% of an enforcement action of more than $1 million.


Remove System Complexity with The “Impedance Mismatch Test”

Everyone has data pipelines composed of lots of different systems. Some may even look very sophisticated on the surface, but the reality is there’s lots of complexity to them––and maybe unnecessarily so. Between the plumbing work to connect different components, the constant performance monitoring required, or the large team with unique expertise to run, debug and manage them, all these factors can add time-to-market delays and operational overhead for product teams. And that’s not all. The more systems you use, the more places you are duplicating your data, which increases the chances of data going out-of-sync or stale. Further, since components may be developed independently by different companies, upgrades or bug fixes might break your pipeline and data layer. ... The variables such as the data format, schema and protocol add up to what’s called the “transformation overhead.” Other variables like performance, durability and scalability add up to what’s called the “pipeline overhead.” Put together, these classifications contribute to what’s known as the “impedance mismatch.”
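
One way to read the “transformation overhead” is as a count of the format, schema and protocol conversions forced between adjacent systems. The sketch below tallies that count for a made-up three-system pipeline; it is an illustrative reading of the idea, not a test published by the author, and the pipeline description is invented.

```python
# Each entry describes one system in a hypothetical pipeline, in order.
pipeline = [
    {"name": "app",       "format": "json",    "schema": "orders_v1",   "protocol": "http"},
    {"name": "queue",     "format": "avro",    "schema": "orders_v1",   "protocol": "kafka"},
    {"name": "warehouse", "format": "parquet", "schema": "orders_flat", "protocol": "jdbc"},
]

def transformation_overhead(systems: list[dict]) -> int:
    """Count how many format/schema/protocol conversions adjacent systems require."""
    overhead = 0
    for upstream, downstream in zip(systems, systems[1:]):
        overhead += sum(
            upstream[key] != downstream[key]
            for key in ("format", "schema", "protocol")
        )
    return overhead

print(transformation_overhead(pipeline))  # 5 conversions across just two hops
```

Every system removed from the chain removes its share of these conversions, which is roughly the intuition behind the “impedance mismatch” framing.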


New SEC Proposal Could Be a Disaster for DeFi Exchanges

Under this new definition, decentralized exchanges such as Uniswap would be subject to SEC regulations and would therefore need to register with the SEC as a securities broker. As decentralized exchanges have no way of complying with the current demands placed on securities exchanges by the SEC, the new rules would effectively kill decentralized exchanges operating within the United States. DeFi enthusiast Gabriel Shapiro highlighted the potentially devastating effects of the proposal in a blog post, noting that “because the proposal achieves this expansion by providing new restraints on ‘communication protocols,’ I believe it may also be unconstitutional as a restraint on free speech,” taking a strong stance against the proposed changes. He also suggested that under the new definition, the SEC could class block explorers, such as Etherscan, as securities exchanges because they allow users to interact with smart contracts to communicate trading interests. Shapiro is not the only prominent figure to come out against the SEC’s proposed rule change.


Accessing And Retaining Knowledge Is Vital For Businesses In The Era Of The Great Reshuffle

In many businesses, when an employee moves to a new job, all that’s left behind is a digital shadow. Their knowledge, expertise and experience disappear, and new hires and old colleagues alike struggle to fill the gaps. A trail of data breadcrumbs that leads to nowhere — old messages, outdated docs and dusty email chains — is often all busy ex-teammates are left to rely on. As a result, business productivity suffers. Of course, this isn’t the fault of the person who has moved roles. Their expertise belongs to them, and too often, organizations undervalue that expertise, further fuelling resignations. It’s in the hands of businesses to do more to retain business-critical knowledge and smooth the transition for new teammates. Nobody should have to rely on guesswork from day one. And if they do, chances are they too won’t stick around for long. To overcome these challenges, we need to think innovatively and start optimizing our tech stacks to reduce knowledge drain and fast-track problem-solving. The solution isn’t more collaboration or communication apps.


FBI Reportedly Considered Buying NSO Spyware

The yearlong investigation by Bergman and Mazzetti also alleges that a group of Israeli computer engineers arrived at a New Jersey building used by the bureau in June 2019 and started testing their equipment. The report alleges that the FBI had bought a version of Pegasus, NSO’s premier spying tool. "For nearly a decade, the Israeli firm had been selling its surveillance software on a subscription basis to law-enforcement and intelligence agencies around the world, promising that it could do what no one else - not a private company, not even a state intelligence service - could do: consistently and reliably crack the encrypted communications of any iPhone or Android smartphone," says the NYT report. As part of their training on the tool, bureau employees bought new smartphones, with SIM cards from other countries. The version of Pegasus that the FBI bought was zero click, i.e. it did not require targets to click on a malicious attachment or link - so the users of the phones being monitored in the U.S. would see no evidence of an ongoing breach.


Zero Trust is hard but worth it

Keeping software updated is key to applying both these rules, and unfortunately that’s often a problem for enterprises. Desktop software, particularly with WFH, is always a challenge to update, but a combination of centralized software management and a scheduled review of software versions on home systems can help. For operations tools, don’t be tempted to skip versions of open source tools just because new releases seem to come along so often. It’s smart to include a version review of critical operations software as part of your overall program of software management and take a close look at new versions at least every six months. Even with all of this, it’s unrealistic to assume that an enterprise can anticipate all the possible threats posed by all the possible bad actors. Preventing disease is best, but treating it once symptoms arise is essential, too. The most underused security principle is that preventing bad behavior means understanding good behavior. Whatever the source of a security problem, it almost always means that something is doing something it shouldn’t be. How can we know that? By watching for different patterns of behavior.
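
"Understanding good behavior" can be made concrete with even a very simple baseline: record which peers each workload normally talks to, then flag anything outside that set. The workload names and destinations below are invented for illustration, and real tooling would baseline far more signals than network peers.

```python
from collections import defaultdict

# Learned baseline: destinations each workload was seen talking to during normal operation.
baseline = defaultdict(set)
baseline["payments-service"].update({"orders-db", "audit-log"})
baseline["web-frontend"].update({"payments-service", "cdn"})

def check(workload: str, destination: str) -> str:
    """Flag connections that fall outside the learned baseline for that workload."""
    return "expected" if destination in baseline[workload] else "anomalous - review"

print(check("payments-service", "orders-db"))            # expected
print(check("payments-service", "mining-pool.example"))  # anomalous - review
```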


Apache Airflow and the New Data Engineering

The ELT steps can seem simple enough on the surface, but with a lot of moving parts, an increasing number of sources and increasing ways to use the data, a lot can go wrong. Data engineers need to contend with complex scheduling requirements, creating dependencies between tasks, figuring out what can run in parallel and what needs to run in series, what makes for a successful task run, how to checkpoint tasks and handle failures and restarts, how to check data quality, how and whom to alert on failures -- all the stuff Airflow was designed to handle. The cloud only makes that process more complicated, with cloud buckets used to stage data from sources before loading that data into cloud-based distributed data management systems like Snowflake, Google Cloud Platform or Databricks. And here’s what I think is important: For many organizations, making the leap from exploratory data analysis [EDA] to formalizing what’s found into data pipelines has become increasingly valuable.
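
To show what "all the stuff Airflow was designed to handle" looks like in code, here is a minimal DAG sketch with a daily schedule, retries, and explicit task dependencies. The task bodies, DAG id, and retry settings are placeholders; a real ELT pipeline would pull from actual sources and alert on failure.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw data from a source system into a staging bucket.
    pass

def load():
    # Placeholder: copy the staged files into the warehouse.
    pass

def transform():
    # Placeholder: run the in-warehouse transformations (the "T" in ELT).
    pass

with DAG(
    dag_id="example_elt",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={
        "retries": 2,                         # retry failed tasks before giving up
        "retry_delay": timedelta(minutes=5),
    },
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Dependencies: extract must succeed before load, and load before transform.
    extract_task >> load_task >> transform_task
```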


Web3’s early promise for artists tainted by rampant stolen works and likenesses

Ironically, the decentralized markets selling NFTs are starting to centralize around one or two providers. One of the most popular, OpenSea, has a full takedown team dedicated to situations like York’s or Quinni’s. The company has taken off, reaching a stratospheric $13 billion valuation after a $300 million round in early January. The company is far and away the biggest player in the NFT market, with an estimated 1.26 million active users and over 80 million NFTs. According to DappRadar, the platform took in $3.27 billion in transactions in the last 30 days and managed 2.33 million transactions. Its nearest competitor, Rarible, saw $14.92 million in transactions in the same period. ... Interestingly, the company also seems to be cracking down on deep fakes or, as OpenSea calls it, non-consensual intimate imagery (NCII), a problem that hasn’t surfaced widely yet but could become pernicious for influencers and media stars. “We have a zero-tolerance policy for NCII,” they said. “NFTs using NCII or similar images (including images doctored to look like someone they are not) are prohibited, and we move quickly to ban accounts that post this material.”


Understanding Web3's Supporting Blockchain Technology

The benefits of a decentralized network are varied, but chief among them is that, because participants don’t have to go through a “trusted party,” nobody has to know or trust anyone else. Every person in the network has a copy of the distributed ledger which contains the exact same data. If a person’s ledger is altered or corrupted, it will be rejected by the other members in the network. One of the cons of a decentralized network is that the more members that are in a network, the slower the network tends to be. In decentralized blockchain systems, unlike distributed systems, security is prioritized over performance. When a blockchain network scales up or out, while the network becomes more secure, performance slows down. This is because every member node has to validate all of the data that is being added to the ledger. “Most references place blockchain squarely in the realm of currencies or finances, but the applicability is far greater,” said Perella. “When the world wide web came about, most websites were maintained by individuals or groups hosting their own systems and data. This format would eventually become known as Web 1.0.”
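
The "copies that disagree get rejected" idea can be sketched in a few lines: fingerprint each member's copy of the ledger and reject any copy whose fingerprint differs from the majority. Real blockchains use chained block hashes and consensus protocols rather than this naive comparison, and the nodes and transactions below are invented for illustration.

```python
import hashlib
import json
from collections import Counter

def fingerprint(ledger_copy: list[dict]) -> str:
    """Hash a ledger copy so copies can be compared cheaply."""
    return hashlib.sha256(json.dumps(ledger_copy, sort_keys=True).encode()).hexdigest()

ledger = [{"from": "alice", "to": "bob", "amount": 5}]

copies = {
    "node-1": list(ledger),
    "node-2": list(ledger),
    "node-3": [{"from": "alice", "to": "mallory", "amount": 5}],  # tampered copy
}

# The copy held by the majority of members wins; a corrupted copy is rejected.
majority_hash, _ = Counter(fingerprint(c) for c in copies.values()).most_common(1)[0]
for node, copy in copies.items():
    status = "accepted" if fingerprint(copy) == majority_hash else "rejected"
    print(node, status)
```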



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad