
Daily Tech Digest - May 06, 2026


Quote for the day:

"Little minds are tamed and subdued by misfortune; but great minds rise above it." -- Washington Irving



The Architect Reborn

In "The Architect Reborn," Paul Preiss argues that the technology architecture profession is experiencing a significant resurgence after fifteen years of structural decline. He explains that the rise of Agile methodologies and the "three-in-a-box" delivery model—comprising product owners, tech leads, and scrum masters—mistakenly rendered the architect role as a redundant expense or a "tax" on speed. This industry shift led many senior developers to pivot toward "engineering" titles while neglecting essential cross-cutting concerns, resulting in massive technical debt and systemic instabilities, exemplified by high-profile failures like the 2024 CrowdStrike outage. However, the current explosion of AI-generated code has created a critical need for human oversight that automated tools cannot replicate. Organizations are rediscovering that they require skilled architects to manage complex quality attributes—such as security, reliability, and maintainability—and to bridge the gap between business strategy and technical execution. By leveraging the five pillars of the Business Technology Architecture Body of Knowledge (BTABoK), the reborn architect ensures that systems are designed with long-term viability and strategic purpose in mind. Ultimately, Preiss suggests that as AI disrupts traditional coding roles, the architect’s unique ability to provide business context and disciplined design is becoming the most vital asset in the modern technology landscape.


Supply-chain attacks take aim at your AI coding agents

The emergence of autonomous AI coding agents has introduced a sophisticated new frontier in software supply chain security, as evidenced by recent attacks targeting these systems. Security researchers from ReversingLabs have identified a campaign dubbed "PromptMink," attributed to the North Korean threat group "Famous Chollima." Unlike traditional social engineering that targets human developers, these adversaries utilize "LLM Optimization" (LLMO) and "knowledge injection" to manipulate AI agents. By crafting persuasive documentation and bait packages on registries like NPM and PyPI, attackers increase the likelihood that an agent will autonomously select and integrate malicious dependencies into its projects. This threat is further exacerbated by "slopsquatting," where attackers register package names that AI agents frequently hallucinate. Once installed, these malicious components can grant attackers remote access through SSH keys or facilitate the exfiltration of sensitive codebases. Because AI agents often operate with high-level system privileges, the risk of rapid, automated compromise is significant. To mitigate these vulnerabilities, organizations must implement rigorous security controls, including mandatory developer reviews for all AI-suggested dependencies and the adoption of comprehensive Software Bill of Materials (SBOM) practices. Ultimately, while AI agents offer productivity gains, their integration into development pipelines requires a "trust but verify" approach to prevent large-scale supply chain poisoning.
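
The "trust but verify" control can be made concrete with a small gate in the build pipeline. The sketch below is only illustrative, assuming a hypothetical reviewer-maintained allowlist and a list of known hallucinated package names; it is not the tooling described in the article.

```python
# Minimal sketch: gate AI-suggested dependencies behind a human-maintained allowlist.
# The allowlist and package names are illustrative, not from the article.
import sys

APPROVED = {"requests", "numpy", "boto3"}          # packages already vetted by reviewers
KNOWN_SLOPSQUATS = {"requestz", "numpy-utils2"}    # names seen only in AI hallucinations

def vet_dependencies(proposed: list[str]) -> list[str]:
    """Return only packages that pass review; flag everything else for a human."""
    cleared = []
    for name in proposed:
        if name in KNOWN_SLOPSQUATS:
            print(f"BLOCK  {name}: matches a known slopsquatting name", file=sys.stderr)
        elif name not in APPROVED:
            print(f"REVIEW {name}: not on the allowlist, needs manual sign-off", file=sys.stderr)
        else:
            cleared.append(name)
    return cleared

if __name__ == "__main__":
    print(vet_dependencies(["requests", "requestz", "left-pad-ai"]))
```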


Why disaster recovery plans fail in geopolitical crises

In "Why Disaster Recovery Plans Fail in Geopolitical Crises," Lisa Morgan explains that traditional disaster recovery (DR) strategies are increasingly inadequate against the cascading disruptions of modern warfare and global instability. Historically, DR plans have relied on "known knowns" like localized hardware failures or natural disasters, but the blurring line between private enterprise and nation-state conflict has introduced unprecedented risks. Recent drone strikes on data centers in the Middle East demonstrate that physical infrastructure is no longer immune to military action. Furthermore, the rise of "techno-nationalism" and strict data sovereignty laws significantly complicates geographic failover, as transiting data across borders can now lead to legal and regulatory violations. Modern resilience requires CIOs to shift from static IT playbooks to cross-functional business capabilities involving legal, risk, and compliance teams. The article also highlights how AI-driven resource constraints, particularly in energy and silicon, exacerbate these vulnerabilities. It is critical that organizations move beyond simple redundancy toward adaptive architectures that can withstand simultaneous infrastructure failures and prioritize employee safety in conflict zones. Ultimately, today’s CIOs must adopt the mindset of military strategists, conducting robust tabletop exercises that challenge existing assumptions and prepare for the total, non-linear disruptions characteristic of the current geopolitical climate.


The immutable mountain: Understanding distributed ledgers through the lens of alpine climbing

The article "The Immutable Mountain" utilizes the high-stakes environment of alpine climbing on Ecuador’s Cayambe volcano to explain the sophisticated mechanics of distributed ledgers. Moving away from traditional centralized command-and-control structures, which often represent single points of failure, the author illustrates how expedition rope teams function as autonomous nodes. Each team possesses the authority to make critical, real-time decisions, mirroring the decentralized nature of blockchain technology. This structure ensures that information is not merely passed down a hierarchy but is synchronized across a collective network, fostering operational resilience and organizational agility. Key technical concepts like consensus are framed through the lens of climbers reaching a shared agreement on route safety, while immutability is compared to the permanent, unalterable nature of a daily trip report. By adopting this "composable authoritative source," modern enterprises can achieve radical transparency and maintain a singular, verifiable version of the truth across disparate departments and external partners. Ultimately, the piece argues that the true power of a distributed ledger lies not in its complex code, but in a foundational philosophy of collective trust. This paradigm shift allows organizations to navigate volatile global markets with the same discipline and absolute reliability required to survive the "death zone" of a mountain summit.


Train like you fight: Why cyber operations teams need no-notice drills

The article "Train like you fight: Why cyber operations teams need no-notice drills" argues that traditional, scheduled tabletop exercises fail to prepare cybersecurity teams for the intense psychological stress of a real-world incident. While planned exercises satisfy compliance, they lack the "threat stimulus" necessary to engage the sympathetic nervous system, which can suppress executive function when a genuine crisis occurs. Drawing on medical training at Level 1 trauma centers and research by psychologist Donald Meichenbaum, the author advocates for "no-notice" drills as a form of stress inoculation. This approach, rooted in the Yerkes-Dodson principle, shifts incident response from a document-heavy process to a conditioned physiological response by raising the threshold at which stress impairs performance. By surprising teams with realistic anomalies, organizations can uncover critical operational gaps—such as communication breakdowns, cross-functional latency, or outdated escalation contacts—that remain hidden during predictable tests. Furthermore, these drills foster psychological safety and trust, as teams learn to navigate ambiguity together without fear of blame through blameless post-mortems. Ultimately, the article maintains that the temporary discomfort of a surprise drill is a necessary investment, as failing during practice is far less damaging than failing during a real breach when the damage clock is already running.


The Art of Lean Governance: Developing the Nerve Center of Trust

Steve Zagoudis’s article, "The Art of Lean Governance: Developing the Nerve Center of Trust," explores the transformation of data governance from a static, policy-driven framework into a dynamic, continuous control system. He argues that the foundation of modern data integrity lies in data reconciliation, which should be elevated from a mere back-office correction mechanism to the primary control for enterprise data risk. By embedding reconciliation directly into data architecture, organizations can establish a "nerve center of trust" that operates at the same cadence as the data itself. This shift is particularly crucial for AI readiness, as the effectiveness of artificial intelligence is fundamentally defined by whether data can be trusted at the moment of use. Without this systemic trust, AI risks accelerating organizational errors rather than providing a competitive advantage. Zagoudis critiques traditional governance for being too episodic and manual, advocating instead for a lean approach that provides automated, evidence-based assurance. Ultimately, lean governance fosters a culture where data is a reliable asset for defensible decision-making. By operationalizing trust through disciplined execution and architectural integration, institutions can move beyond conceptual alignment to achieve genuine agility and accuracy in an increasingly data-driven landscape, ensuring that their technological investments yield meaningful results.
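
As an illustration of reconciliation operating as a continuous control rather than a back-office fix, the following sketch compares the same records held in two systems and surfaces the breaks; the DataFrames and column names are assumptions for the example.

```python
# Minimal sketch: reconciliation as an automated control rather than a back-office fix.
# Source and target data and column names are illustrative assumptions.
import pandas as pd

source = pd.DataFrame({"account": ["A1", "A2", "A3"], "balance": [100.0, 250.0, 75.0]})
target = pd.DataFrame({"account": ["A1", "A2", "A3"], "balance": [100.0, 250.0, 80.0]})

def reconcile(src: pd.DataFrame, tgt: pd.DataFrame, key: str, value: str) -> pd.DataFrame:
    """Return rows where the two systems disagree, as evidence for the control."""
    merged = src.merge(tgt, on=key, how="outer", suffixes=("_src", "_tgt"), indicator=True)
    return merged[
        (merged["_merge"] != "both")
        | (merged[f"{value}_src"] != merged[f"{value}_tgt"])
    ]

breaks = reconcile(source, target, key="account", value="balance")
print(breaks)   # A3 differs (75.0 vs 80.0) -> route to data owners before the data is used
```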


Narrative Architecture: Designing Stories That Survive Algorithms

The Forbes Business Council article, "Narrative Architecture: Designing Stories That Survive Algorithms," critiques the modern trend of platform-first storytelling, where brands prioritize distribution and algorithmic trends over substantive identity. This reactionary approach often leads to "identity erosion," as content becomes ephemeral and dependent on shifting digital environments. To combat this, the author introduces "narrative architecture" as a vital strategic asset. This framework acts as a brand's "home base," grounding all content in a coherent core story that defines the organization’s history, values, and fundamental purpose. Rather than letting algorithms dictate their messaging, brands should use them as tools to inform a pre-established narrative. By shifting focus from fleeting visibility to deep-rooted credibility, companies can build lasting trust with audiences, investors, and potential employees. The article argues that stories built on solid narrative architecture possess a unique longevity that extends far beyond digital platforms, manifesting in conference invitations, earned media coverage, and consistent internal brand alignment. Ultimately, while platform-optimized content might gain temporary engagement, a well-architected story ensures a brand remains relevant and respected even as algorithms evolve, securing long-term reputation and sustainable business success in an increasingly crowded digital landscape.


Zero Trust in OT: Why It's Been Hard and Why New CISA Guidance Changes Everything

The Nozomi Networks blog post titled "Zero Trust in OT: Why It’s Been Hard and Why New CISA Guidance Changes Everything" examines the historic friction and recent transformative shifts in applying Zero Trust (ZT) principles to operational technology. While ZT has matured within IT, extending it to industrial environments like SCADA systems and critical infrastructure has long been hindered by significant technical and cultural hurdles. Traditional IT security controls—such as active scanning, encryption, and aggressive network isolation—often disrupt real-time industrial processes, posing severe risks to safety, system uptime, and equipment integrity. However, the author emphasizes that the April 2026 release of CISA’s "Adapting Zero Trust Principles to Operational Technology" guide marks a pivotal turning point. This collaborative framework, developed alongside the DOE and FBI, validates unique industrial constraints by prioritizing physical safety and availability over mere data protection. By advocating for specialized, "OT-safe" strategies—including passive monitoring, protocol-aware visibility, and operationally-aware segmentation—the guidance removes years of ambiguity for practitioners. Ultimately, the blog argues that Zero Trust has evolved from an IT concept forced onto the factory floor into a practical, resilient framework designed to protect the physical processes essential to modern society without sacrificing operational integrity.


The expensive habits we can't seem to break

The article "The Expensive Habits We Can't Seem to Break" explores critical management failures that continue to hinder organizational success, focusing on three persistent mistakes. First, it critiques the tendency to treat culture as a mere communications exercise. Instead of relying on glossy value statements, the author argues that culture is defined by lived experiences and managerial responses during crises. Second, the piece highlights the costly underinvestment in the middle manager layer. With research showing that a significant portion of voluntary turnover is preventable through better management, the author notes that managers are often overextended and undersupported, lacking the necessary tools for "people stewardship." Finally, the article addresses the confusion between flexibility and autonomy. The return-to-office debate often misses the mark by focusing on location rather than trust. Organizations that dictate mandates rather than co-creating norms risk losing critical talent who seek agency over their work. Ultimately, bridging these gaps requires a move away from superficial fixes toward deep-seated changes in leadership behavior and employee trust. By addressing these "expensive habits," HR leaders can foster psychologically safe environments that drive retention and long-term performance, ensuring that organizational values are authentically integrated into the daily reality of the workforce.


The tech revolution that wasn’t

The MIT News article "The tech revolution that wasn't" explores Associate Professor Dwai Banerjee’s book, Computing in the Age of Decolonization: India's Lost Technological Revolution. It details India’s early, ambitious attempts to achieve technological sovereignty following independence, exemplified by the 1960 creation of the TIFRAC computer at the Tata Institute of Fundamental Research. Despite being a state-of-the-art machine built with minimal resources, the TIFRAC never reached mass production. Banerjee examines how India’s vision of becoming a global hardware manufacturing powerhouse was derailed by geopolitical constraints, limited knowledge sharing from the U.S., and a pivotal domestic shift in the 1970s and 1980s toward the private software services sector. This transition favored quick profits through outsourcing over the long-term investment required for R&D and manufacturing. Consequently, India became a leader in offshoring talent rather than a primary innovator in computer hardware. Banerjee challenges the common "individual genius" narrative of tech history, emphasizing instead that large-scale global capital and institutional support are the true determinants of success. Ultimately, the book uses India’s experience to illustrate the enduring, unequal power structures that continue to shape technological advancement in post-colonial nations, where the promise of a sovereign digital revolution was traded for a role in the global services economy.

Daily Tech Digest - April 27, 2024

AI twins and the digital revolution

The digital twin is designed to work across a single base station with a few mobile devices all the way up to hundreds of base stations with thousands of devices. “I would say the RF propagation piece is perhaps one of the most exciting areas apart from the data collection,” Vasishta noted. “The ability to simulate at scale real antenna, including interface interference and other elements, data is where we’ve really spent the most time to make sure that it is an accurate implementation”. The platform also includes a software-defined, full RAN stack to allow researchers and members to customise, programme and test 6G network components in real time. Vendors, such as Nokia, can bring their own RAN stack to the platform, but Nvidia’s open RAN compliant stack is provided. Vasishta added users of the research platform can collect data from their digital twin within their channel model, which allows them to train for optimisation. “It now allows you to use AI and machine learning in conjunction with a digital twin to fully simulate an environment and create site specific channel models so you can always have best connectivity or lowest power consumption, for instance,” he said.


The temptation of AI as a service

AWS has introduced a new feature aimed at becoming the prime hub for companies’ custom generative AI models. The new offering, Custom Model Import, launched on Amazon Bedrock, AWS’s enterprise-focused generative AI platform, and provides enterprises with infrastructure to host and fine-tune their in-house AI intellectual property as fully managed sets of APIs. This move aligns with increasing enterprise demand for tailored AI solutions. It also offers tools to expand model knowledge, fine-tune performance, and mitigate bias. All of these are needed to drive AI for value without increasing the risk of using AI. In the case of AWS, Custom Model Import allows model integrations into Amazon Bedrock, where they join other models, such as Meta’s Llama 3 or Anthropic’s Claude 3. This gives AI users the advantage of managing their models centrally alongside established workflows already in place on Bedrock. Moreover, AWS has announced enhancements to the Titan suite of AI models. The Titan Image Generator, which translates text descriptions into images, is shifting to general availability.
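
For readers curious what using a Bedrock-hosted model looks like in practice, the hedged sketch below calls the boto3 bedrock-runtime client; the model ARN and request payload are placeholders, since custom-imported models expose their own identifiers and payload formats.

```python
# Sketch only: invoking a model hosted on Amazon Bedrock via boto3.
# The model ID and request body shape are placeholders, not a specific model's API.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:123456789012:imported-model/example",  # placeholder
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"prompt": "Summarize our Q3 support tickets.", "max_tokens": 256}),
)
print(json.loads(response["body"].read()))   # response body is a streaming payload
```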


Overwhelmed? 6 ways to stop small stresses at work from becoming big problems

"Someone once used the analogy that you have crystal balls and bouncy balls. If you drop your crystal ball, it shatters, and you'll never be able to get it back. Whereas if you drop your bouncy ball, it will bounce back." "I think you need to work out the crystal balls to prioritize because if you drop that ball, it's gone. For me, it always helps to take stuff off the priority list. And I think that approach helps with work/life balance. Sometimes, it's important to choose." ... "If we have a small problem in one store, and we pick up that's prevalent in all stores, collectively the impact is significant. So, that's why I get to the root cause as quickly as possible."  "And then you understand what's going on rather than just trying to stick a plaster over what appears to be a cut, but is something quite a bit deeper underneath." ... "If you look at something in darkness, it can feel pretty overwhelming quickly. So, giving a problem focus and attention, and getting some people around it, tends to put the issue in perspective."  ... "It's nice to have someone who can point out to you, 'You're ignoring that itch, why don't you do something about it?' I've found it's good to speak with an expert with a different perspective."


15 Characteristics of (a Healthy) Organizational Culture

Shared common values refer to the fundamental beliefs and principles that an organization adopts as its foundation. These values act as a compass, guiding behaviors, decision-making, and interactions both within the organization and with external stakeholders. They help create a cohesive culture by aligning employees’ actions with the company’s core mission and vision. ... A clear purpose and direction align the organization’s efforts and goals. This clarity helps unite the team, focusing their efforts on achieving specific objectives and guiding strategic planning and daily operations. ... Transparent and regular communication supports openly sharing information and feedback throughout the organization. This practice fosters trust, helps in early identification of issues, encourages collaboration, and ensures that everyone is informed and aligned with the organization’s goals. ... Collaboration and teamwork underpin a cooperative environment where groups work together to achieve collective objectives. This approach enhances problem-solving, innovation, and efficiency, while also building a supportive work environment.


Palo Alto Networks’ CTO on why machine learning is revolutionizing SOC performance

When it comes to data center security, you have to do both. You have to keep them out. And that’s the role of traditional cybersecurity. So network security, including, of course, the security between the data center and Ethernet, internal security for segmentation. It includes endpoint security for making sure that vulnerabilities aren’t being exploited and malware isn’t running. It includes identity and access management. Or even privileged access management (PAM), which we don’t do. We don’t do identity access or PAM. It includes many different things. This is about keeping them out or not letting them walk inside laterally. And then the second part of it which, which goes to your question, is now let’s assume they are inside and all defenses have failed. It’s the role of the SOC to look for them. We call it hunting, the hunting function in the SOC. How do we do that? You need machine learning, not [large language models] LLMs, or GPT, but real, traditional machine learning, to do both, both to keep them out and also both to find them if they’re already inside. So we can talk about both and how we use machine learning here and how we use machine learning there.
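
As a rough illustration of what "real, traditional machine learning" for hunting can look like (and not a description of Palo Alto Networks' actual pipeline), the sketch below trains an isolation forest on baseline host behaviour and flags the outlier; the features and thresholds are invented for the example.

```python
# Illustrative only: classic anomaly detection for hunting, with made-up features.
# Flags hosts whose behaviour deviates from the learned baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per host: [logins per hour, bytes sent (MB), distinct internal hosts contacted]
baseline = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(500, 3))
today = np.vstack([baseline[:10], [[40, 900, 60]]])   # one host moving laterally

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
scores = model.predict(today)                          # -1 = anomalous, 1 = normal
print(np.where(scores == -1)[0])                       # index of the suspicious host
```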


From hyperscale to hybrid: unlocking the potential of the cloud

To optimize their cloud adoption strategies, and ensure they architect the best fits for their needs, organizations will first need to undertake detailed assessments of their workloads to determine which cloud combinations to go for. Weighing up which cloud options are most appropriate for which workloads isn’t always an easy task. Ideally, organizations should utilize a cloud adoption framework to help scope out the overall security and business drivers that will influence their cloud strategy decision-making. Enabling organizations to identify and mitigate risks and ensure compliance as they move ahead, these frameworks make it easier to proceed confidently with their cloud adoption plans. Since every infrastructure strategy will have unique requirements that include tailored security measures, leveraging the expertise of cloud security professionals will also prove invaluable for ensuring appropriate security measures are in place. Similarly, organizations will be able to gain a deeper understanding of how best to orchestrate their on-premises, private, and public clouds in a unified and cost/performance-optimized manner.


No Fear, AI is Here: How to Harness AI for Social Good

We must proactively think about how our organizations can responsibly leverage AI for good. Our role is to offer our teams the support and guidance required to harness AI’s full power in ways big and small to inspire positive change, ensuring fear doesn’t override optimism. While AI has an undeniable advantage when it comes to its ability to outperform, it cannot replace the power of human creativity, perspectives, and deep insight. ... We only have a finite amount of time to address climate change and related issues such as poverty and inequity. To get there, we’re going to have to try. And then try again. And again. Though it will be an uphill climb, AI can help us climb faster -- and explore as many options as we possibly can, as quickly as we can -- if we use it responsibly. The key is for tech impact leaders to bring forward a human-centric perspective to their company’s investments and use of AI technology, ensuring that their strategies don’t lead to unintended consequences for employment. Don’t let fear prevent you from getting all the help you can from the most powerful technology available. Your team, and the world, need you to be fearless.


Data for AI — 3 Vital Tactics to Get Your Organization Through the AI PoC Chasm

We now have the opportunity to automate manual heavy lifting in data prep. AI models can be trained to detect and strip out sensitive data, identify anomalies, infer records of source, determine schemas, eliminate duplication, and crawl over data to detect bias. There is an explosion of new services and tools available to take the grunt work out of data prep and keep the data bar high. By automating these labor-intensive tasks, AI empowers organizations to accelerate data preparation, minimize errors, and free up valuable human resources to focus on higher-value activities, ultimately enhancing the efficiency and effectiveness of AI initiatives. ... AI is being experimented with and adopted broadly across organizations. With so much activity and interest, it is difficult to centralize work, and often centralization creates bottlenecks that slow down innovation. Encouraging decentralization and autonomy in delivering AI use cases is beneficial as it increases capacity for innovation across many teams, and embeds work into the business with a focus on specific business priorities. 
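
A small example of that grunt-work automation, assuming pandas and a naive email pattern rather than a production PII detector, might look like this:

```python
# Sketch of automating two data-prep chores mentioned above: de-duplication and
# stripping obvious sensitive fields. Column names and the email pattern are
# illustrative assumptions, not a complete PII detector.
import re
import pandas as pd

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

df = pd.DataFrame({
    "customer": ["Ada", "Ada", "Grace"],
    "note": ["contact ada@example.com", "contact ada@example.com", "no issues"],
})

df = df.drop_duplicates()                                              # eliminate duplication
df["note"] = df["note"].str.replace(EMAIL, "[REDACTED]", regex=True)   # strip sensitive data
print(df)
```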


Blockchain Distributed Ledger Market 2024-2032

Organizations are leveraging blockchain's decentralized and immutable ledger capabilities to enhance transparency, security, and efficiency in their operations. Secondly, the growing demand for secure and transparent transactions, coupled with the rising concerns over data privacy and cybersecurity, is driving the adoption of blockchain distributed ledger solutions. Businesses and consumers alike are increasingly turning to blockchain technology to safeguard their data and assets against cyber threats and fraudulent activities. Moreover, the proliferation of digitalization and the internet of things (IoT) is further driving market growth by creating a demand for reliable and tamper-proof data storage and transmission systems. Blockchain's ability to provide a decentralized and verifiable record of transactions makes it well-suited for IoT applications, such as supply chain tracking, smart contracts, and secure data sharing. Additionally, the emergence of regulatory frameworks and standards for blockchain technology adoption is providing a favorable environment for market expansion, as it instills confidence among businesses and investors regarding compliance and legal aspects.


The real cost of cloud banking

Although compute and memory costs have come down and capacities have gone up over the years, the fact is an inefficient piece of software will still cost you more today than a well-designed, optimised one. Before, it was a simple case of running your software and checking how much memory you were using. With cloud, the pricing may be transparent, but the options available and cost calculation are much more complex. Such is the complexity involved with cloud costs that many banks bring in specialists to help them optimise their spend. In fact, it’s not only banks, but banking software providers too. Cloud cost optimisation is a fine art that requires time and expertise to fully understand. It would be easy to blame developers, but I’ve never seen business requirements that state that applications should minimise their memory use or use the least expensive type of storage. I’ve been in the position of “the business” needing to make decisions on requirements for storage options, and these decisions aren’t easy, even for someone with a technical background. In defence of cloud providers, their pricing is transparent.



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis

Daily Tech Digest - March 23, 2024

The tech tightrope: safeguarding privacy in an AI-powered world

The only means of truly securing our privacy is through proactive enforcement of the utmost secure and novel technological measures at our disposal, those that ensure a strong emphasis on privacy and data encryption, while still enabling breakthrough technologies such as generative AI models and cloud computing tools full access to large pools of data in order to meet their full potential. Protecting data when it is at rest (i.e., in storage) or in transit (i.e., moving through or across networks) is ubiquitous. The data is encrypted, which is generally enough to ensure that it remains safe from unwanted access. The overwhelming challenge is how to also secure data while it is in use. ... One major issue with Confidential Computing is that it cannot scale sufficiently to cover the magnitude of use cases necessary to handle every possible AI model and cloud instance. Because a TEE must be created and defined for each specific use case, the time, effort, and cost involved in protecting data is restrictive. The bigger issue with Confidential Computing, though, is that it is not foolproof. The data in the TEE must still be unencrypted for it to be processed, opening the potential for quantum attack vectors to exploit vulnerabilities in the environment.
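
The gap the author describes is easy to see in code: standard tooling protects data at rest and in transit, but the value still has to be decrypted in memory before it can be processed. A minimal sketch using the Python cryptography library:

```python
# Illustration of the "data in use" gap: encryption protects the record at rest,
# but the plaintext must be exposed in memory to compute on it.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

record = b"patient_id=123;diagnosis=..."
stored = f.encrypt(record)          # safe at rest / in transit

plaintext = f.decrypt(stored)       # must be decrypted "in use" to be processed
print(b"diagnosis" in plaintext)    # processing happens on the unencrypted bytes
```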


Ethical Considerations in AI Development

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to guarantee better conditions for the development and use of this innovative technology. Parliament’s priority is to ensure that AI systems used in the EU are secure, transparent, traceable, non-discriminatory, and environmentally friendly. AI systems must be overseen by people, rather than automation, to avoid harmful outcomes. The European Parliament also wants to establish a uniform and technologically neutral definition of AI that can be applied to future AI systems. “It is a pioneering law in the world,” highlighted Von Der Leyen, who celebrates that AI can thus be developed in a legal framework that can be “trusted.” The institutions of the European Union have agreed on the artificial intelligence law that allows or prohibits the use of technology depending on the risk it poses to people and that seeks to boost the European industry against giants such as China and the United States. The pact was reached after intense negotiations in which one of the sensitive points has been the use that law enforcement agencies will be able to make of biometric identification cameras to guarantee national security and prevent crimes such as terrorism or the protection of infrastructure.


FBI and CISA warn government systems against increased DDoS attacks

The advisory has grouped typical DoS and DDoS attacks based on three technique types: volume-based, protocol-based, and application layer-based. While volume-based attacks aim to cause request fatigue for the targeted systems, rendering them unable to handle legitimate requests, protocol-based attacks identify and target the weaker protocol implementations of a system causing it to malfunction. A novel loop DoS attack reported this week targeting network systems, using weak user datagram protocol (UDP)-based communications to transmit data packets, is an example of a protocol-based DoS attack. This new technique is among the rarest instances of a DoS attack, which can potentially result in a huge volume of malicious traffic. Application layer-based attacks refer to attacks that exploit vulnerabilities within specific applications or services running on the target system. Upon exploiting the weaknesses in the application, the attackers find ways to over-consume the processing powers of the target system, causing them to malfunction. Interestingly, the loop DoS attack can also be placed within the application layer DoS category, as it primarily attacks the communication flaw in the application layer resulting from its dependency on the UDP transport protocol.
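
As a toy illustration of the volume-based category, the sketch below applies a sliding-window request budget per source; real DDoS mitigation happens upstream of application code, and the thresholds here are arbitrary assumptions.

```python
# Toy illustration of request fatigue: a sliding-window counter that flags a
# source exceeding a request budget. Thresholds are arbitrary assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100
requests: dict[str, deque] = defaultdict(deque)

def allow(source_ip: str, now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    window = requests[source_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                      # drop timestamps outside the window
    if len(window) >= MAX_REQUESTS:
        return False                          # shed load from this source
    window.append(now)
    return True

print(all(allow("203.0.113.7", now=i * 0.01) for i in range(100)))   # within budget
print(allow("203.0.113.7", now=1.0))                                  # over budget -> False
```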


The Future of AI: Hybrid Edge Deployments Are Indispensable

Deploying AI models locally eliminates dependence on external network connections or remote servers, minimizing the risk of downtime caused by maintenance, outages or connectivity issues. This level of resilience is particularly critical in sectors like healthcare and other sensitive industries where uninterrupted service is absolutely critical. Edge deployments also ensure “low latency,” as the speed of light is a fundamental limiting factor, and there may be significant latency when accessing cloud infrastructure. With increasingly powerful hardware available at the edge, it enables the processing of data that is physically nearby. Another benefit is the ability to harness specialized hardware that is tailored to their needs, optimizing performance and efficiency while bypassing network latency and bandwidth limitations, as well as configuration constraints imposed by cloud providers. Lastly, edge deployments allow for the centralization of large shared assets within a secure environment, which in turn simplifies storage management and access control, enhancing data security and governance.


OpenTelemetry promises run-time "profiling" as it guns for graduation

This means engineers will be able “to correlate resource exhaustion or poor user experience across their services with not just the specific service or pod being impacted, but the function or line of code most responsible for it.” In other words, they won't just know when something falls down, but why; something commercial offerings can provide but the project has lacked. OpenTelemetry governance committee member Daniel Gomez Blanco, principal software engineer at Skyscanner, added that the advances in profiling raise new challenges, such as how to represent user sessions, how they are tied into resource attributes, and how to propagate context from the client side to the back end and back again. As a result, the project has formed a new special interest group to tackle these challenges. Honeycomb.io director of open source Austin Parker said: “We're right along the glide path in order to continue to grow as a mature project.” As for the graduation process, he said, the security audits will continue over the summer along with work on best practices, audits and remediation. They should complete in the fall: “We'll publish results along these lines, and fixes, and then we're gonna have a really cool party in Salt Lake City probably.”
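
The profiling signal itself is still being specified, but the correlation idea rests on the resource attributes that today's SDKs already attach to telemetry. A minimal sketch with the OpenTelemetry Python tracing SDK (service and attribute names are illustrative):

```python
# Sketch using today's OpenTelemetry Python tracing SDK: the resource attributes set
# here are what other signals, such as profiles, would be correlated against.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

resource = Resource.create({"service.name": "checkout", "deployment.environment": "prod"})
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("payment.amount", 42.50)   # the slow function would be tied back here
```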


Fake data breaches: Countering the damage

Fake data breaches can hurt an organization’s security reputation, even if it quickly debunks the fake breach. Whether real or fake, news of a potential breach can create panic among employees, customers, and other stakeholders. For publicly traded companies, the consequences can be even more damaging as such rumors can degrade a company’s stock value. Fake breaches also have direct financial consequences. Investigating a fake breach consumes time, money, and security personnel. Time spent on such investigations can mean time away from mitigating real and critical security threats, especially for SMBs with limited resources. Some cybercriminals might deliberately create panic and confusion about a fake breach to distract security teams from a different, real attack they might be trying to launch. Fake data breaches can help them gauge the response time and protocols an organization may have in place. These insights can be valuable for future, more severe attacks. In this sense, a fake data breach may well be a “dry run” and an indicator of an upcoming cyber-attack.


CISOs: Make Sure Your Team Members Fit Your Company Culture

Cybersecurity is not a solitary endeavor; it's a collective fight against common adversaries. CISOs can enhance their teams' capabilities by fostering collaboration both within the organization and with external communities. Internally, promoting a security-aware culture across all departments can empower employees to be the first line of defense. Externally, participating in industry forums, sharing threat intelligence with peers and engaging in public-private partnerships can provide access to shared resources, insights and best practices. These collaborations can extend a team's reach and effectiveness beyond its immediate members. Diversifying recruitment efforts can help uncover untapped talent pools. Initiatives aimed at increasing the participation of underrepresented groups in cybersecurity, such as women and veterans, can broaden the range of candidates. CISOs should also look beyond traditional recruitment channels and explore alternative sources such as hackathons, cybersecurity competitions and online communities.


Architecting for High Availability in the Cloud with Cellular Architecture

Cellular architecture is a design pattern that helps achieve high availability in multi-tenant applications. The goal is to design your application so that you can deploy all of its components into an isolated "cell" that is fully self-sufficient. Then, you create many discrete deployments of these "cells" with no dependencies between them. Each cell is a fully operational, autonomous instance of your application ready to serve traffic with no dependencies on or interactions with any other cells. Traffic from your users can be distributed across these cells, and if an outage occurs in one cell, it will only impact the users in that cell while the other cells remain fully operational. ... one of the goals of cellular architecture is to minimize the blast radius of outages, and one of the most likely times that an outage may occur is immediately after a deployment. So, in practice, we’ll want to add a few protections to our deployment process so that if we detect an issue, we can stop deploying the changes until we’ve resolved it. To that end, adding a "staging" cell that we can deploy to first and a "bake" period between deployments to subsequent cells is a good idea.
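
A minimal sketch of the routing side of this pattern, assuming illustrative cell names and a hypothetical staging cell, deterministically maps each user to one isolated cell and defines a cautious rollout order:

```python
# Minimal sketch of cell assignment: each user hashes deterministically to one
# isolated cell, so an outage in cell-2 only affects the users mapped to it.
# Cell names and the staging cell are illustrative assumptions.
import hashlib

CELLS = ["cell-1", "cell-2", "cell-3"]
STAGING_CELL = "cell-staging"   # receives each deployment first, then a bake period

def cell_for(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return CELLS[int(digest, 16) % len(CELLS)]

def rollout_order() -> list[str]:
    return [STAGING_CELL] + CELLS   # deploy to staging, bake, then one cell at a time

print(cell_for("user-1234"), rollout_order())
```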


Swift promotes the concept of a universal shared ledger. But based on messaging

While many of Swift’s points are perfectly valid, in our view, this demonstrates the classic conundrum of how incumbents respond to innovation. Swift could make sense as the operator of some of these shared ledgers. Likewise, incumbent central depositories (CSDs) might be the logical operators for securities ledgers. ... “By leveraging existing components of the financial system that already work well together – including secure financial messaging such as that provided by Swift – the industry can avoid undue levels of market concentration risk, and draw upon tried-and-tested practices to deliver the rich, structured data that it has been working towards for decades.” It continues, “Rather than having each institution record its own individual ‘state’, that function could be abstracted and performed at an industry level, similar to how messaging evolved. Such a state machine could be built on more decentralised blockchain technology, or equally a more centralised platform like Swift’s Transaction Manager could be enhanced for this use.”


The AI Advantage: Mitigating the Security Alert Deluge in a Talent-Scarce Landscape

Security teams are still struggling with an overflow of alerts. The report found that an average of 9,854 false positives arise weekly, wasting valuable time and resources as analysts investigate these non-issues. Moreover, undetected threats present an even more significant concern. The average organization fails to identify a staggering 12,009 threats each week, leaving vulnerabilities exposed. Imagine this: you’re a cybersecurity analyst tasked with safeguarding your organization’s attack surface. But instead of strategically deploying defenses, you’re buried under an avalanche of security alerts. Thousands of alerts bombard your console daily, a relentless barrage threatening to consume your entire workday. This overwhelming volume is the reality for many security analysts. While security tools play a crucial role in detection, they often generate many false positives – harmless activities mistaken for threats. These false alarms are like smoke detectors going off whenever you toast a bagel, forcing you to waste time investigating non-issues. The consequences are dire, as exhausted analysts are more likely to miss genuine threats amidst the noise.


AWS CISO: Pay Attention to How AI Uses Your Data

AI users always need to think about whether they're getting quality responses. The reason for security is for people to trust their computer systems. If you're putting together this complex system that uses a generative AI model to deliver something to the customer, you need the customer to trust that the AI is giving them the right information to act on and that it's protecting their information. ... With strong foundations already in place, AWS was well prepared to step up to the challenge as we've been working with AI for years. We have a large number of internal AI solutions and a number of services we offer directly to our customers, and security has been a major consideration in how we develop these solutions. It's what our customers ask about, and it's what they expect. As one of the largest-scale cloud providers, we have broad visibility into evolving security needs across the globe. The threat intelligence we capture is aggregated and used to develop actionable insights that are used within customer tools and services such as GuardDuty. In addition, our threat intelligence is used to generate automated security actions on behalf of customers to keep their data secure.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - November 05, 2022

The tight connection between data governance and observability

Although data governance helps in establishing the right set of data management policies and procedures, current data stacks are growing beyond boundaries. With data sets now scaling with more data sources, more tables, and more complexity, there is a pressing need to maintain a constant pulse on the health of these systems. Since any amount of downtime can lead to partial, erroneous, missing, or otherwise inaccurate data, organizations need to do better than just implementing a handful of policies. Data observability enables organizations to cater to these increasingly complex data systems and support an endless ecosystem of data sources and formats. By providing a real-time view of the health and state of data across the enterprise, it empowers them to identify and resolve issues and go far beyond just describing the problem. Observability provides much-needed context to the issue, paving the way for a quick resolution while also ensuring it doesn’t transpire again.


Data Ethics: New Frontiers in Data Governance

Navigating that crucial difference is rarely cut and dried even in simple, day-to-day personal interactions. Still, within the world of data, ethical questions can quickly take on multiple dimensions and present challenges unique to the field. Assessing data ethics can be decidedly confusing, for as Lopez pointed out, “Not all things that are bad for data are actually bad for the world … and vice versa.” Whereas the ethical actions and judgments that we make as private individuals tend to play out within a limited set of factors, the implications of even the most innocuous events within large-scale Data Management can be huge. Company data exists in “space,” potentially flowing between departments and projects, but privacy agreements and other safeguards that apply for some purposes may not apply for others. Data from spreadsheets authored for in-house analytics, for example, might violate a client privacy agreement if it migrates to open cloud storage. 


Distributed ledger technology and the future of insurance

The rise of crypto itself also opens up new and lucrative opportunities for insurers. Not only are we seeing an upward trajectory in consumer adoption of crypto (which jumped worldwide by over 800% in 2021 alone), but there is also significant momentum among institutional investors such as hedge funds and pension funds. This is in part due to recent regulatory and legal clarifications (you can read my reflections on the recent MiCA regulation here), but also the unabated enthusiasm of end investors for this new asset class. Another key accelerator is the growing acceptance of ‘proof of stake’ (in opposition to ‘proof of work’) as the primary consensus mechanism to validate transactions on the blockchain. Critically, proof of stake is far less energy-intensive than its counterpart (by about 99%), and overcomes critical limits on network capacity needed to drive institutional adoption. Ethereum’s transition from proof of work to proof of stake in September of this year was a watershed moment for the industry. As a result, banks are looking to meet institutional demand by launching their own crypto custody solutions.


Digital transformation: 3 pieces of contrarian advice

The contrarian advice, which is now starting to enter the mainstream, is that it’s time to fully embrace the hybrid cloud. Companies are learning that while public cloud still has many benefits, the cost over time adds up. As an organization grows, there are usually opportunities to do at least some of the workload in the private cloud to gain benefits in locality, data transfer, and flexibility of in-house customizations. Other considerations of the private cloud include various compliance, privacy, and security advantages. The hybrid cloud can offer “best of both worlds” benefits, such as edge computing and more effective paths to advanced technologies such as artificial intelligence and machine learning (AI/ML). The “best of both worlds” effect comes into play when taking advantage of APIs and solutions that are open source and based on open standards. One example of this is running a workload that is simple to run in your private cloud and only has speedup when run on specialized hardware such as a supercomputer, quantum computer, or AI/ML (or other workload-specific) hardware.


EU Cyber Resilience Act

The CRA applies to hardware and software that contain digital components and whose intended use includes a connection to a device or network and applies to all digital products placed on the EU market (including imported products). ... Manufacturers will need to assess the cyber risk of their digital hardware and software and take continued action to fix problems during the lifetime of the product. In addition, before placing any digital product on the market, manufacturers will be required to conduct a formal ‘conformity assessment’ of such product and implement appropriate policies and procedures documenting relevant cybersecurity aspects of the products. Companies will have to notify the EU cybersecurity agency (ENISA) of any exploited vulnerability within the product, and any incident impacting product security, within 24 hours of becoming aware. Manufacturers will also be required to notify users of any incident impacting product security without delay.


How DevOps Helps With Secure Deployments

The goal of DevSecOps is to provide security best practices in a way that doesn’t disrupt team productivity. Secure development is the key to a smooth deployment process. It’s frustrating to have security requirements for your product and not see them implemented or taken seriously; DevSecOps brings back that focus by reminding everyone just how important good hygiene practices are for both developers and operations. For security to be more effective and reliable, it should be incorporated from the very beginning. This means that instead of waiting until there is an issue or crisis before implementing protective measures such as firewalls and encryption keys, you want your developers working on them up front so they can ensure everything will work well together later. It also helps ensure security issues are found as early in the process as possible, so they stay close to the decision-makers who can address them. It’s much easier (and less painful) when security issues can be fixed while you still remember what happened with your project; it’s like having an extra set of eyes on the work.


Why enterprise architecture and IT are distinct

The reality is that enterprise architecture too often functions as a specialism within IT. However, one way to draw the distinction is by thinking about the differences between IT and enterprise architecture in terms of information flow. The ability to store and share ideas and information has, and always will be, at the heart of business actions – and it’s something which we now deeply associate with IT. All of the technological infrastructure, however, means nothing without the context in which it operates. Focusing purely on IT forgets the importance of users, as well as the employees and customers who generate, share and utilize the information. Information can’t be useful if it exists in a vacuum – it needs to connect and permeate a business in a dynamic way that’s bespoke to its ecosystem, people and situation. Enterprise architecture’s role is to enable information flow beyond just establishing the infrastructure. In considering and responding to the way in which systems interact with users and business processes, enterprise architecture is aligned to the long-term initiatives and stakeholders, as opposed to just the deployment of technology and tools alone.


How to connect hybrid knowledge workers to corporate culture

Managers must be able to articulate what the company culture is and translate company culture to daily team life, Gartner said. However, the Gartner survey of knowledge workers revealed that less than half of managers can effectively communicate why the broader organisational culture is important. “Teams and managers are the best mechanism for creating culture connectedness by enabling each team to create their own micro-culture while still supporting the organisation,” said Steele. “Organisations can double employee culture connectedness by embracing micro-cultures.” To help connect hybrid knowledge workers to company culture, managers should gauge employees’ understanding of the broader organisational values and their team’s specific norms and processes, she advised. Managers can then work together with their teams to translate what each value means in the context of their work, said Steele, adding that they can then create a list of behaviours that contribute to the culture and those that will derail it.


How Data Privacy Regulations Can Affect Your Business

An obvious impact of data regulations is that they reduce the amount of data a business can collect. Businesses collect and store data to help develop and improve their company, establishing a better understanding of their customer base and target audience. Unfortunately, storing large quantities of data can pose a significant risk in terms of cybercrime, requiring considerable resources to help protect IT systems. As a result, some businesses are choosing only to collect data that is critical to their operations, limiting the chances of a costly data breach. ... The risk management and compliance of businesses and any third parties involved are very important in the modern business climate. New regulations include many contractual safeguarding procedures, strict data protection, and evidence that compliance has been achieved. ... There have also been new data roles created within businesses in recent years, including those of internal privacy managers, chief data officers (CDOs), privacy executives, data protection officers, and data scientists.


Accelerating SQL Queries on a Modern Real-Time Database

Modern databases have a cluster architecture spanning multiple nodes for scale, performance, and reliability. High density of fast storage is achieved through solid-state disks (SSDs). Hybrid memory architecture (HMA) stores indexes and data in dynamic random-access memory (DRAM), SSD, and other devices to provide cost-effective fast storage capacity. ... Disk (SSD) reads and writes are optimized for latency and throughput. Indexes play a key role in realizing fast access to data. This requires supporting secondary indexes on integer, string, geospatial, map, and list columns. ... The thread architecture on a cluster node is optimized to exploit parallelism of multicore processors of modern hardware, and also to minimize conflict and maximize throughput. The data is distributed uniformly across all nodes to maximize parallelism and throughput. The client library connects directly to individual cluster nodes and processes a request in a single hop, by distributing the request to nodes where it is processed in parallel, and assembling the results.
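
The single-hop, scatter-gather behaviour described for the client library can be sketched as follows; query_node() is a stand-in for the real client call and the node addresses are placeholders.

```python
# Sketch of the scatter-gather pattern described above: fan a query out to every
# node in parallel and assemble the partial results. query_node() is a stand-in
# for the real client library call; node addresses are placeholders.
from concurrent.futures import ThreadPoolExecutor

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def query_node(node: str, predicate: str) -> list[dict]:
    # Placeholder: a real client would issue the request directly to this node.
    return [{"node": node, "predicate": predicate, "rows": []}]

def scatter_gather(predicate: str) -> list[dict]:
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        partials = pool.map(lambda n: query_node(n, predicate), NODES)
    return [row for partial in partials for row in partial]   # assemble the results

print(scatter_gather("age > 30"))
```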



Quote for the day:

"You must stand firm if you wish to lead the firm" -- Constance Chuks Friday

Daily Tech Digest - April 09, 2022

Essentials of Enterprise Architecture Tool

EA tools allow organizations to map out their business process architecture, business capability architecture, application architecture, data architecture, integration architecture, and technology architecture. The common capabilities of an EA tool include: an EA repository that supports business, information, technology, and solution viewpoints and their relationships, as well as business direction, vision, and strategy; EA modelling, covering at minimum the business, information, solution, and technology viewpoints, along with modelling of as-is and target states, impact analysis, and roadmaps; decision-analysis capabilities such as gap analysis, traceability, impact analysis, scenario planning, and systems thinking; multiple views tailored to different audiences and users, such as executives, architects and designers, business planners, and suppliers; customization and extension of the meta-model, diagrams, menus, matrices, and reports; and collaboration and sharing features, including simultaneous model editing, a shared remote repository, version management with model comparison and merge, easy publishing, and review capabilities.


Could Blockchain Be Sustainability’s Missing Link?

Environmental sustainability is only one use case for blockchain technology. Companies can use distributed ledgers for social sustainability and governance. For example, pharmaceutical companies can collect data on a blockchain that identifies and traces prescription drugs. This data collection can prevent consumers from falling prey to counterfeit, stolen, or harmful products. Banks can collateralize physical assets, such as land titles, on a blockchain to keep an unalterable record and protect consumers from fraud. In supply chain finance, organizations can use distributed ledger technology to match the downstream flow of goods with the upstream flow of payments and information. That can help level the playing field for smaller financial institutions. Sustainability must be seamless. ServiceNow recently partnered with Hedera to help organizations easily adopt digital ledger technology on the Now Platform. This partnership provides a seamless connection between trusted workflows across organizations.


Supply chain woes? Analytics may be the answer

Enterprises face multiple risks throughout their supply chains, Deloitte says, including shortened product life cycles and rapidly changing consumer preferences; increasing volatility and availability of resources; heightened regulatory enforcement and noncompliance penalties; and shifting economic landscapes with significant supplier consolidation. ... “Often people think of the supply chain as one thing and it is not,” Korba says. “We think of the supply chain as the sum of several parts of the whole business operation — from understanding customer demand to materials management and manufacturing or sourcing and purchasing, to logistics and transportation, to inventory management and automated replenishment orders at Optimas and at our customers’ locations.” A key to success is the ability for all the supply chain tools the company uses to work together seamlessly, to help keep customers appropriately stocked and better manage costs, demand, inventory, production, and suppliers. The information provided through analytics needs to address financial issues such as cashflow and pricing on the supply and demand sides.


Cloud 2.0: Serverless architecture and the next wave of enterprise offerings

Serverless architecture brings two benefits. First, it enables a pay-as-you-go model on the full stack of technology and on the most granular basis possible, thereby reducing the overall run cost. The pay-as-you-go model is activated by putting functions into production via the operator of the serverless ecosystem only when they are needed. Therefore, serverless architecture not only reduces costs below the economies of scale provided by cloud-based setups capable of operating infrastructure at large scale, but also reduces idle capacity. Second, serverless architecture provides ecosystem access for the underlying infrastructure as well as the entire functionality, thereby drastically reducing the cost to transform the company’s IT environment. Ecosystem access for functions is achieved through the provider’s FaaS and BaaS models instead of being redeveloped for every client. While ecosystem access in SaaS was only possible for the entire software package, with serverless architecture even small-scale functions can be reused, thereby offering more flexibility and reusability on a broad basis.
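
To make the pay-as-you-go point concrete, a minimal FaaS-style handler (written here in the AWS Lambda convention, with an assumed event shape) is just a function the platform invokes when an event arrives; there is nothing to run, or pay for, between invocations.

```python
# Minimal FaaS-style handler (AWS Lambda conventions): the platform runs it only
# when an event arrives, which is what enables pay-as-you-go at function granularity.
# The event shape is an illustrative assumption.
import json

def handler(event, context):
    order_id = event.get("order_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed order {order_id}"}),
    }

if __name__ == "__main__":
    print(handler({"order_id": "A-42"}, context=None))   # local smoke test
```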


Meta wants to turn real life into a free-to-play

Companies adopting the free-to-play monetization techniques in their titles naturally have an incentive to max out the users’ shopping sprees. To this end, they can deploy a whole array of design decisions, from annoying pop-ups with links to in-game shops to more sophisticated tools. The latter use behavioral data and psychological tricks to goad the users into spending more. Some of the latest patents coming from leading industry names, such as Activision, put machine learning at the service of the company’s bottom line. Tweaking the matchmaking system to prompt new players to spend more? Check. Clustering players in groups to target them with tailored messaging, offerings, and prices? Check. These and other techniques live and breathe behavioral data. As such, they do raise red flags in terms of data exploitation, especially if you consider who tends to fall for them the hardest. Free-to-play games make a solid chunk of their revenues off a very small subset of their player base, the so-called “whales,” as high-paying players are known in the industry.
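To make the "clustering players to target them with tailored messaging, offerings, and prices" point concrete, here is a small, hypothetical sketch that groups players by behavioural data with k-means. It illustrates the generic technique only and is not any vendor's actual system; the player figures are made up.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-player features: [sessions_per_week, total_spend_usd]
players = np.array([
    [2,   0.0],   # occasional non-spender
    [3,   1.0],
    [10,  5.0],   # regular small spender
    [12,  8.0],
    [20, 300.0],  # "whale": high-frequency, high-spend
    [25, 450.0],
])

# Cluster players into three behavioural segments.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(players)

for player, label in zip(players, labels):
    print(f"player {player} -> segment {label}")
# Segments like these can then be targeted with different messaging, offers, and prices.
```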


Managing Complex Dependencies with Distributed Architecture at eBay

The eBay engineering team recently outlined how they built a scalable release system. The solution leverages a distributed architecture to release more than 3,000 dependent libraries in about two hours, using Jenkins in combination with Groovy scripts. As we learnt from Randy Shoup (VP of engineering and chief architect at eBay) and Mark Weinberg (VP, core product engineering at eBay), eBay had systemic challenges with releasing major dependencies, leading to the equivalent of distributed monoliths. Late last year, eBay began migrating their legacy libraries to Mavenized source code. The engineering team needed to account for the complicated dependency relationships between the libraries before the release: a library can only be released once all of its dependencies have been released, and with a large number of candidate libraries and intricate interdependencies, release performance suffers considerably if the release sequence is not orchestrated well.
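The ordering constraint described above — release a library only after everything it depends on has been released — is a topological sort over the dependency graph. The sketch below shows one way such an ordering could be computed and grouped into parallel release waves; the library names are hypothetical, and this is not eBay's actual Jenkins/Groovy implementation.

```python
from collections import defaultdict, deque

def release_waves(dependencies: dict[str, set[str]]) -> list[list[str]]:
    """Group libraries into 'waves': every library in a wave depends only on
    libraries released in earlier waves, so each wave can be released in parallel."""
    indegree = {lib: len(deps) for lib, deps in dependencies.items()}
    dependents = defaultdict(list)
    for lib, deps in dependencies.items():
        for dep in deps:
            dependents[dep].append(lib)

    current = deque(lib for lib, deg in indegree.items() if deg == 0)
    waves, released = [], 0
    while current:
        wave = sorted(current)
        waves.append(wave)
        released += len(wave)
        nxt = deque()
        for lib in wave:
            for dependent in dependents[lib]:
                indegree[dependent] -= 1
                if indegree[dependent] == 0:
                    nxt.append(dependent)
        current = nxt

    if released != len(dependencies):
        raise ValueError("dependency cycle detected; no valid release order exists")
    return waves

# Hypothetical dependency graph: each library maps to the libraries it depends on.
graph = {
    "core-utils": set(),
    "logging": set(),
    "db-client": {"core-utils"},
    "payments": {"db-client", "logging"},
    "checkout": {"payments", "core-utils"},
}
print(release_waves(graph))
# [['core-utils', 'logging'], ['db-client'], ['payments'], ['checkout']]
```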


Mark Zuckerberg’s vision for the metaverse is off to an abysmal start

While Meta’s promotional vision for metaverse worlds is a series of distinct snapshots, other metaverse platforms, such as Decentraland, The Sandbox, and Cryptovoxels, feature some level of urban planning. Like in many real-world cities, they use a grid system with plots of land distributed on a horizontal plane. This allows for property to be easily parceled and sold. However, many of these plots have remained empty, demonstrating that they are primarily traded speculatively. In some instances, content—buildings and things to do, see, and buy within them—has been added to plots of land, in an effort to create value. Virtual property developer the Metaverse Group is leasing Decentraland parcels and offering in-house architectural services to tenants. Its parent company, Tokens.com, has virtual headquarters there too, a blocky sci-fi-style tower in an area called Crypto Valley. ... Real cities are now choosing to emulate themselves in the metaverse. South Korea’s Metaverse 120 Centre will provide both recreational and administrative public services. 


SARB notes benefits, risks in using distributed ledger technology

One of the primary risks stems from the lack of regulatory certainty, as the existing legal and regulatory frameworks for financial markets were not designed for trading, clearing or settling on DLT, he added. Innovation should be done in a way that takes the financial system forward to benefit society as a whole, including contributing to objectives such as improving efficiency, lowering barriers to entry for financial activity and addressing any challenges restricting access to meaningful financial services. ... “PK2 has demonstrated that building a platform for a tokenised security would impact on the existing participants in the financial market ecosystem, as several functions currently being performed by separately licensed market infrastructures could be carried out on a single shared platform. ... Further, the report, produced in partnership with the Intergovernmental Fintech Working Group and financial industry participants, highlights several legal, regulatory and policy implications that need to be carefully considered in the application of DLT to financial markets.


Why There is No Digital Future Without Blockchain

In web3, new storage solutions allow people to store data for each other in a secure and decentralized way. This makes it much, much more difficult to obtain user data by hacking a server full of data. At the same time, data management on the user side will be completely permission-based: users will be able to manage data access on the fly, giving and withdrawing permission to personal data when needed. In our vision, this will end up being the way the internet works in the future, whether you apply for a loan or take an online personality test. ... The power of blockchain here lies in the power of digital sovereignty, in other words, the freedom to do whatever you want online without anybody telling you otherwise. Here again, the decentralized nature of blockchain is key, because it makes it virtually impossible for any third party to interfere with the process. ... The idea is that the decentralized nature of blockchain allows people to transact wealth freely, without the need for banks, governments, or anybody else. This once sounded like a futuristic libertarian utopia; now it’s becoming a reality.


How to Measure Agile Maturity

Delivering successful products is essential and goes hand in hand with knowing how good we are at creating the product: our performance. I suggest resisting the urge to measure our performance as a cost. There are many useful metrics available, such as speed, quality, and predictability, that monitor our performance. A word of caution is needed when deciding which metrics are valuable and which are not. For example, velocity is not suitable for comparing team performance. It can be a valuable metric at team level, intended for the team to monitor its own speed, but velocities do not add up to give you a number for your organisational speed. Some suggestions for useful metrics: cycle time, release frequency, product index, innovation rate, etc. ... Measuring how well we perform in delivering value to the customer also serves as a metric for organisational change. How? If it takes multiple sprints and 16 hand-offs to ship an integrated product, we can monitor how we are doing in trying to deliver that integrated product without hand-offs in a single sprint. If the number of hand-offs of a team goes down, its ability to deliver Done goes up, which is a metric of organisational improvement.
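As an illustration of how a flow metric such as cycle time can be computed from work-item data, here is a small Python sketch; the work items and dates are hypothetical, and the point is to trend the number over time for one team, not to compare teams.

```python
from datetime import datetime
from statistics import mean

# Hypothetical work items with the dates work started and finished.
items = [
    {"id": "STORY-101", "started": "2021-11-01", "done": "2021-11-04"},
    {"id": "STORY-102", "started": "2021-11-02", "done": "2021-11-09"},
    {"id": "STORY-103", "started": "2021-11-05", "done": "2021-11-06"},
]

def cycle_time_days(item: dict) -> int:
    """Cycle time: elapsed days between starting and finishing a work item."""
    started = datetime.fromisoformat(item["started"])
    done = datetime.fromisoformat(item["done"])
    return (done - started).days

times = [cycle_time_days(item) for item in items]
print(f"average cycle time: {mean(times):.1f} days")  # trend this, don't compare teams with it
```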



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - December 12, 2021

AWS Among 12 Cloud Services Affected by Flaws in Eltima SDK

USB Over Ethernet enables sharing of multiple USB devices over Ethernet, so that users can connect to devices such as webcams on remote machines anywhere in the world as if the devices were physically plugged into their own computers. The flaws are in the USB Over Ethernet function of the Eltima SDK, not in the cloud services themselves, but because of code-sharing between the server side and the end user apps, they affect both clients – such as laptops and desktops running Amazon WorkSpaces software – and cloud-based machine instances that rely on services such as Amazon Nimble Studio AMI, that run in the Amazon cloud. The flaws allow attackers to escalate privileges so that they can launch a slew of malicious actions, including to kick the knees off the very security products that users depend on for protection. Specifically, the vulnerabilities can be used to “disable security products, overwrite system components, corrupt the operating system or perform malicious operations unimpeded,” SentinelOne senior security researcher Kasif Dekel said in a report published on Tuesday.


Rust in the Linux Kernel: ‘Good Enough’

When we first looked at the idea of Rust in the Linux kernel, it was noted that the objective was not to rewrite the kernel’s 25 million lines of code in Rust, but rather to augment new developments with a language that is more memory-safe than the standard C normally used in Linux development. Part of the issue with using Rust is that Rust is compiled with LLVM, as opposed to GCC, and consequently supports fewer architectures. This is a problem we saw play out when the Python cryptography library replaced some old C code with Rust, leading to a situation where certain architectures would not be supported. Hence, using Rust for drivers would limit the impact of this particular limitation. Ojeda further noted that the Rust for Linux project has been invited to a number of conferences and events this past year, and has even garnered some support from Red Hat, which joins Arm, Google, and Microsoft in supporting the effort. According to Ojeda, Red Hat says that “there is interest in using Rust for kernel work that Red Hat is considering.”


DeepMind tests the limits of large AI language systems with 280-billion-parameter model

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters). It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix. “I think right now it really looks like the model can fail in a variety of ways,” said Rae.
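To give a feel for what these parameter counts imply in practice, here is a back-of-the-envelope sketch estimating the memory needed just to store each model's weights at different numeric precisions. It is a rough illustration only, using the parameter counts quoted above and ignoring optimizer state, activations, and other overheads.

```python
def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

for name, params in [("GPT-3", 175e9), ("Gopher", 280e9), ("Megatron", 530e9)]:
    fp32 = weight_memory_gb(params, 4)  # 32-bit floats
    fp16 = weight_memory_gb(params, 2)  # 16-bit floats
    print(f"{name}: ~{fp32:,.0f} GB at fp32, ~{fp16:,.0f} GB at fp16")
```

Even at half precision, Gopher's weights alone run to several hundred gigabytes, which is part of why such models sit beyond the reach of most organizations.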


2022 transformations promise better builders, automation, robotics

The Great Resignation is real, and it has affected the logistics industry more than anyone realizes. People don’t want low-paying and difficult jobs when there’s a global marketplace where they can find better work. Automation will be seen as a way to address this, and in 2022, we will see a lot of tech VC investment in automation and robotics. Some say SpaceX and Virgin can deliver cargo via orbit, but I think that’s ridiculous. What we need (and what I think will be funded in 2022) are more electric and autonomous vehicles, like eVTOL, a company that is innovating the “air mobility” market. According to eVTOL’s website, the U.S. Department of Defense has awarded $6 million to the City of Springfield, Ohio, for a National Advanced Air Mobility Center of Excellence. ... In 2022 transformations, grocery will cease to be an in-store retail experience only, and the sector will be as virtual and digitally driven as the best of them. Things get interesting when we combine locker pickup, virtual grocery, and automated last-mile delivery using autonomous vehicles that can deliver within a mile of the warehouse or store.


Penetration testing explained: How ethical hackers simulate attacks

In a broad sense, a penetration test works in exactly the same way that a real attempt to breach an organization's systems would. The pen testers begin by examining and fingerprinting the hosts, ports, and network services associated with the target organization. They will then research potential vulnerabilities in this attack surface, and that research might suggest further, more detailed probes into the target system. Eventually, they'll attempt to breach their target's perimeter and get access to protected data or gain control of their systems. The details, of course, can vary a lot; there are different types of penetration tests, and we'll discuss the variations in the next section. But it's important to note first that the exact type of test conducted and the scope of the simulated attack need to be agreed upon in advance between the testers and the target organization. A penetration test that successfully breaches an organization's important systems or data can cause a great deal of resentment or embarrassment among that organization's IT or security leadership.
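To make the fingerprinting step concrete, here is a minimal sketch of a TCP port probe in Python. Real pen-test tooling (nmap and friends) is far more capable; the host and port list are hypothetical, and probes like this should only ever be run against systems you own or are explicitly authorized to test.

```python
import socket

def probe_ports(host: str, ports: list[int], timeout: float = 0.5) -> dict[int, bool]:
    """Attempt a TCP connection to each port; open ports accept the connection."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            results[port] = sock.connect_ex((host, port)) == 0
    return results

# Only probe hosts you own or are explicitly authorized to test.
if __name__ == "__main__":
    for port, is_open in probe_ports("127.0.0.1", [22, 80, 443, 8080]).items():
        print(f"port {port}: {'open' if is_open else 'closed/filtered'}")
```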


EV charging in underground carparks is hard. Blockchain to the rescue

According to Bharadwaj, the concrete and steel environment effectively acted as a “Faraday cage,” which meant that the EV chargers wouldn’t talk to people’s mobile phones when they tried to initiate charging. You could find yourself stranded, unable to charge your car. “So we had to innovate.” ... As with any EV charging, a payment app connects your car to the EV charger. With Xeal, the use of NFC means the only time you need the Internet is to download the app in the first instance and create a profile that includes your personal and vehicle information and payment details. You then receive a cryptographic token on your mobile phone that authenticates your identity and enables you to access all of Xeal’s public charging stations. The token is time-bound, which means it dissolves after use. To charge your car, you hold your phone up to the charger. Your mobile reads the cryptographic token, automatically bringing up an NFC scanner. It opens the app, authenticates your charging session, starts scanning, and within milliseconds, the charging session starts.
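The idea of a time-bound cryptographic token can be illustrated generically with an HMAC-signed token that expires after a set period. The sketch below is a hypothetical illustration of the general technique, not Xeal's actual protocol; the secret key, user ID, and expiry window are made up.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = b"charger-network-shared-secret"  # hypothetical key held by the operator

def issue_token(user_id: str, ttl_seconds: int = 300) -> str:
    """Issue a token that authenticates the user and expires after ttl_seconds."""
    payload = json.dumps({"user": user_id, "exp": int(time.time()) + ttl_seconds}).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "." +
            base64.urlsafe_b64encode(signature).decode())

def verify_token(token: str) -> bool:
    """Reject the token if the signature is wrong or the expiry time has passed."""
    payload_b64, _, signature_b64 = token.partition(".")
    payload = base64.urlsafe_b64decode(payload_b64.encode())
    signature = base64.urlsafe_b64decode(signature_b64.encode())
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return False
    return json.loads(payload)["exp"] >= time.time()

token = issue_token("driver-42", ttl_seconds=300)
print(verify_token(token))  # True within the 5-minute window, False afterwards
```

Because verification only needs the shared secret and the device clock, a scheme along these lines can work offline at the charger, which is the property the article highlights.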


Top 8 AI and ML Trends to Watch in 2022

The scarcity of skilled AI developers or engineers stands as a major barrier to adopting AI technology in many companies. No-code and low-code technologies come to the rescue. These solutions aim to offer simple interfaces, in theory, to develop highly complex AI systems. Today, web design and no-code user interface (UI) tools let users create web pages simply by dragging and dropping graphical elements together. Similarly, no-code AI technology allows developers to create intelligent AI systems by simply merging different ready-made modules and feeding them industrial domain-specific data. Furthermore, NLP, low-code, and no-code technologies will soon enable us to instruct complex machines with our voice or written instructions. These advancements will result in the “democratization” of AI, ML, and data technologies. ... In 2022, with the aid of AI and ML technologies, more businesses will automate multiple yet repetitive processes that involve large volumes of information and data. In the coming years, an increased rate of automation can be seen in various industries using robotic process automation (RPA) and intelligent business process management software (iBPMS). 


The limitations of scaling up AI language models

Large language models like OpenAI’s GPT-3 show an aptitude for generating humanlike text and code, automatically writing emails and articles, composing poetry, and fixing bugs in software. But the dominant approach to developing these models involves leveraging massive computational resources, which has consequences. Beyond the fact that training and deploying large language models can incur high technical costs, the requirements put the models beyond the reach of many organizations and institutions. Scaling also doesn’t resolve the major problem of model bias and toxicity, which often creeps in from the data used to train the models. In a panel during the Conference on Neural Information Processing Systems (NeurIPS) 2021, experts from the field discussed how the research community should adapt as progress in language models continues to be driven by scaled-up algorithms. The panelists explored how to ensure that smaller institutions can meaningfully research and audit large-scale systems, as well as ways that they can help to ensure that the systems behave as intended.


Here are three ways distributed ledger technology can transform markets

While firms have narrowed their scope to address more targeted pain points, the increased digitalisation of assets is helping to drive interest in the adoption of DLT in new ways. Previous talk of mass disruption of the financial system has given way to more realistic, but still transformative, discussions around how DLT could open doors to a new era of business workflows, enabling transactional exchanges of assets and payments to be recorded, linked, and traced throughout their entire lifecycle. DLT’s true potential rests with its ability to eliminate traditional “data silos”, so that parties no longer need to build separate recording systems, each holding a copy of their version of “the truth”. This inefficiency leads to time delays, increased costs and data quality issues. In addition, the technology can enhance security and resilience, and would give regulators real-time access to ledger transactions to monitor and mitigate risk more effectively. In recent years, we have been pursuing a number of DLT-based opportunities, helping us understand where we believe the technology can deliver maximum value while retaining the highest levels of risk management.


To identity and beyond—One architect's viewpoint

Simple is often better: You can do (almost) anything with technology, but it doesn't mean you should. Especially in the security space, many customers overengineer solutions. I like this video from Google’s Stripe conference to underscore this point. People, process, technology: Design for people to enhance process, not tech first. There are no "perfect" solutions. We need to balance various risk factors and decisions will be different for each business. Too many customers design an approach that their users later avoid. Focus on 'why' first and 'how' later: Be the annoying 7-yr old kid with a million questions. We can't arrive at the right answer if we don't know the right questions to ask. Lots of customers make assumptions on how things need to work instead of defining the business problem. There are always multiple paths that can be taken. Long tail of past best practices: Recognize that best practices are changing at light speed. 



Quote for the day:

"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell