
Daily Tech Digest - April 15, 2026


Quote for the day:

"Definiteness of purpose is the starting point of all achievement." -- W. Clement Stone




How to Choose the Right Cybersecurity Vendor

In his 2026 "No-BS Guide" for enterprise buyers, Deepak Gupta argues that traditional cybersecurity procurement is fundamentally flawed, often falling into the traps of compliance checklists and over-reliance on analyst reports. To navigate a crowded market of over 3,000 vendors, Gupta proposes a framework centered on five critical signals. First, buyers must scrutinize the technical DNA of a vendor’s leadership, ensuring founders possess genuine security expertise rather than just sales backgrounds. Second, evaluations should prioritize architectural depth over superficial feature lists, testing how products handle malicious and unexpected inputs. Third, compliance claims must be verified; instead of accepting simple certificates, buyers should request full SOC 2 reports and contact auditing firms directly. Fourth, customer evidence is paramount. Prospective buyers should interview current users about "worst-day" incident responses and deployment realities to bypass marketing spin. Finally, assessing a vendor's long-term business viability and roadmap alignment prevents future risks of lock-in or product deprioritization. By treating analyst rankings as mere data points and conducting rigorous technical due diligence, security leaders can avoid "vaporware" and select partners capable of defending against modern threats. This approach moves procurement from a simple checkbox exercise toward a strategic assessment of technical resilience and organizational integrity.


Cyber security chiefs split on quantum threat urgency

Cybersecurity leaders are currently divided over the urgency of addressing quantum computing threats, a debate intensified by World Quantum Day and the 2024 release of NIST’s post-quantum cryptography standards. Robin Macfarlane, CEO of RRMac Associates, advocates for immediate action, asserting that quantum technology is already influencing industrial applications and risk analysis at major firms. He warns that traditional encryption methods are nearing obsolescence and urges organizations to proactively audit vulnerabilities and invest in quantum-resilient infrastructure to counter increasingly sophisticated threats. Conversely, Jon Abbott of ThreatAware suggests a more pragmatic approach, arguing that without production-ready quantum computers, the efficacy of modern quantum-proof methods remains speculative. He believes organizations should prioritize more immediate dangers, such as AI-driven malware and ransomware, rather than committing vast resources to quantum migration prematurely. While perspectives vary, both camps agree that establishing a comprehensive inventory of existing encryption is a critical first step. This split highlights a broader strategic dilemma: whether to prepare now for future "harvest now, decrypt later" risks or to focus on the rapidly evolving landscape of contemporary cyberattacks. Ultimately, the decision rests on an organization's specific data-retention needs and its exposure to high-value long-term risks versus today's pressing operational vulnerabilities.
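
Both camps converge on the encryption inventory as the first step. A minimal sketch of what that triage might look like, where the algorithm names and the vulnerable/resistant mapping are illustrative rather than exhaustive:

```python
# Minimal sketch: classify a cryptographic inventory by quantum exposure.
# The algorithm lists below are illustrative, not exhaustive; the resistant
# set reflects the 2024 NIST post-quantum standards (ML-KEM, ML-DSA, SLH-DSA)
# plus symmetric primitives considered quantum-safe at sufficient key sizes.

QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DSA"}
QUANTUM_RESISTANT = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA", "AES-256", "SHA3-256"}

def triage_inventory(inventory):
    """Split an inventory of (system, algorithm) pairs into migration buckets."""
    report = {"migrate": [], "keep": [], "review": []}
    for system, algorithm in inventory:
        if algorithm in QUANTUM_VULNERABLE:
            report["migrate"].append((system, algorithm))
        elif algorithm in QUANTUM_RESISTANT:
            report["keep"].append((system, algorithm))
        else:
            report["review"].append((system, algorithm))  # unknown: human review
    return report

inventory = [
    ("vpn-gateway", "RSA-2048"),
    ("backup-archive", "AES-256"),
    ("legacy-app", "3DES"),
]
report = triage_inventory(inventory)
```

Whichever camp an organization sides with, a report like this is cheap to produce and makes the later migrate-now-or-wait decision concrete per system.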


Industry risks competing 6G standards as AI, interoperability lag

As the telecommunications industry progresses toward 6G, the transition into 3GPP Release 20 studies highlights significant risks regarding standard fragmentation and delayed AI interoperability. Unlike its predecessors, 6G aims to embed artificial intelligence deeply into network design, yet the lack of coherent standards for data models and interfaces threatens to stifle seamless multi-vendor integration. Experts warn that unresolved issues concerning air interface protocols and spectrum requirements could lead to the emergence of competing global standards, potentially mirroring the fractured landscape seen during the 3G era. Geopolitical tensions further complicate this process, as the scrutiny of contributions from various nations may hinder a unified technical consensus. Furthermore, 6G must address the shortcomings of 5G, such as architectural rigidity and vendor lock-in, by fostering better alignment between 3GPP and O-RAN frameworks. For nations like India, which is actively shaping global frameworks through the Bharat 6G Mission, successful standardization is vital for ensuring economic scalability and nationwide reach. Ultimately, the industry’s ability to formalize these standards by 2028 will determine whether 6G achieves its promised innovation or remains hindered by interoperability gaps and regional silos, failing to deliver a truly global, autonomous network ecosystem.


The great rebalancing: The give and take of cloud and on-premises data management

"The Great Rebalancing" describes a fundamental shift in enterprise data management as organizations transition from "cloud-first" mandates toward a more strategic, hybrid approach. Driven primarily by the rise of generative AI and private AI initiatives, this trend involves the selective repatriation of workloads from public clouds back to on-premises or colocation environments. High egress fees, escalating storage costs, and the intensive compute requirements of AI models have made public cloud economics increasingly difficult to justify for many large-scale datasets. Beyond financial concerns, the article highlights how organizations are prioritizing data sovereignty, security, and compliance with strict regulations like GDPR and HIPAA, which are often more effectively managed within a private infrastructure. By deploying AI models closer to their primary data sources, companies can significantly reduce latency and eliminate the pricing unpredictability associated with cloud-native architectures. However, this rebalancing is not a total retreat from the cloud. Instead, it represents a move toward a more nuanced infrastructure model where businesses evaluate each workload based on its specific performance and cost requirements. This hybrid future allows enterprises to leverage the scalability of public cloud services while maintaining the control and efficiency of on-premises systems, ultimately creating a more sustainable data management ecosystem.


Building a Security-First Engineering Culture - The Only Defense That Holds When Everything Else Is Tested

In the article "Building a Security-First Engineering Culture," the author argues that a robust cultural foundation is the most critical defense an organization can possess, especially when technical tools and perimeter defenses inevitably face challenges. The core premise revolves around the "shift-left" philosophy, emphasizing that security must be an intrinsic part of the design and development phases rather than an afterthought or a final hurdle in the release cycle. By moving beyond a reactive mindset, engineering teams are encouraged to adopt a proactive stance where security is a shared responsibility, not just the domain of a specialized department. Key strategies discussed include continuous education to empower developers, the integration of automated security checks into CI/CD pipelines, and the implementation of regular threat modeling sessions. Ultimately, the author suggests that a true security-first culture is defined by transparency and a no-blame environment, which facilitates the early identification and resolution of vulnerabilities. This cultural shift ensures that security becomes a core engineering value, creating a resilient ecosystem that remains steadfast even when individual systems or processes are compromised. By fostering this collective accountability, organizations can build sustainable and trustworthy software in an increasingly complex and evolving digital threat landscape.
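
One of the concrete shift-left strategies mentioned is automated security checks in CI/CD. A toy sketch of the idea, a build gate that fails on known-bad pinned dependencies; the advisory set here is a hardcoded placeholder, whereas real pipelines would use tools such as pip-audit or OWASP Dependency-Check fed by live vulnerability feeds:

```python
# Minimal sketch of a CI security gate: fail the build when a pinned
# dependency matches a known-bad version. KNOWN_BAD is a placeholder
# stand-in for a real vulnerability feed.

KNOWN_BAD = {("requests", "2.5.0"), ("pyyaml", "5.3")}  # illustrative advisories

def parse_pins(requirements_text):
    """Parse 'name==version' lines, ignoring comments and blanks."""
    pins = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
    return pins

def security_gate(requirements_text):
    """Return flagged pins; an empty list means the gate passes."""
    return [pin for pin in parse_pins(requirements_text) if pin in KNOWN_BAD]

flagged = security_gate("requests==2.5.0\nflask==3.0.0  # web framework\n")
```

The cultural point survives the simplification: the check runs on every commit, so finding the vulnerable pin is routine and blameless rather than a release-day crisis.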


Too Many Signals: How Curated Authenticity Cuts Through The Noise

In the Forbes article "Too Many Signals: How Curated Authenticity Cuts Through The Noise," Nataly Kelly explores the pitfalls of modern brand communication, where many companies mistakenly equate authenticity with constant, unfiltered sharing. This "oversharing" often results in a muddled brand identity that confuses consumers instead of connecting with them. To address this, Kelly proposes the concept of "curated authenticity," which involves filtering genuine brand expressions through a strategic lens to ensure every signal reinforces a central story. This disciplined approach is increasingly vital in the age of generative AI, which has flooded the market with low-quality "AI slop," making coherence and emotional resonance more valuable than sheer frequency. Kelly advises marketing leaders to align their content with desired perceptions, maintain consistency across all channels, and avoid performative gestures that lack depth. She also stresses the importance of brand tracking, urging CMOs to treat brand health as a critical business metric rather than a soft one. Ultimately, the article argues that by combining human judgment with data-driven insights, brands can cut through digital noise, fostering long-term memories and meaningful engagement rather than just accumulating fleeting likes in a crowded marketplace.


Fixing encryption isn’t enough. Quantum developments put focus on authentication

Recent advancements in quantum computing research have shifted the cybersecurity landscape, compelling organizations to broaden their defensive strategies beyond standard encryption to include robust authentication. New findings from Google and Caltech indicate that the hardware requirements to break elliptic curve cryptography—essential for digital signatures and system access—are significantly lower than previously anticipated, potentially requiring as few as 1,200 logical qubits. This discovery has led major tech players like Google and Cloudflare to move up their "quantum apocalypse" projections to 2029. While many enterprises have focused on protecting stored data from "Harvest Now, Decrypt Later" tactics, experts warn that compromised authentication is far more catastrophic. A quantum-broken credential allows attackers to bypass security perimeters entirely, potentially turning automated software updates into vectors for remote code execution. Although functional large-scale quantum computers remain in development, the complexity of migrating to post-quantum cryptography (PQC) necessitates immediate action. Organizations are encouraged to form dedicated task forces to inventory vulnerable systems and prioritize the deployment of quantum-resistant authentication protocols. By acknowledging that the timeline for quantum threats is no longer abstract, enterprises can better prepare for a future where traditional cryptographic standards like RSA and elliptic curve cryptography are no longer sufficient to ensure digital sovereignty.
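
The "no longer abstract" timeline argument is often formalized as Mosca's inequality: if the years a secret must stay secret plus the years migration will take exceed the years until a cryptographically relevant quantum computer exists, data protected today is already exposed. A sketch with illustrative numbers only:

```python
# Sketch of Mosca's inequality for quantum-risk planning: if required
# secrecy lifetime (x) plus migration time (y) exceeds time until a
# cryptographically relevant quantum computer (z), data encrypted today
# is already exposed to "harvest now, decrypt later" collection.

def mosca_at_risk(shelf_life_years, migration_years, years_to_quantum):
    return shelf_life_years + migration_years > years_to_quantum

# Illustrative inputs: records retained 10 years, a 4-year PQC migration,
# and a 2029 horizon (~3 years from the article's vantage point).
exposed = mosca_at_risk(shelf_life_years=10, migration_years=4,
                        years_to_quantum=3)
```

Under those assumptions the organization is already late, which is why the article's task-force recommendation starts with inventory rather than with buying products.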


Coordinated vulnerability disclosure is now an EU obligation, but cultural change takes time

In an insightful interview with Help Net Security, Nuno Rodrigues-Carvalho of ENISA explores the evolving landscape of global vulnerability management and the systemic vulnerabilities within the CVE program. Following recent funding uncertainties involving MITRE and CISA, Carvalho emphasizes that the CVE system acts as a critical global backbone, yet its reliance on single institutional points of failure necessitates a more distributed and resilient architecture. Within the European Union, the regulatory environment is shifting significantly through the Cyber Resilience Act (CRA) and the NIS2 Directive, which introduce stringent accountability for vendors. These frameworks mandate that manufacturers report exploited vulnerabilities within specific, narrow timelines through a Single Reporting Platform managed by ENISA. Carvalho highlights that while historical cultural barriers once led organizations to view vulnerability disclosure as a liability, modern standards are normalizing coordinated disclosure as a core component of cybersecurity governance. To bolster this effort, ENISA is expanding European vulnerability services and developing the EU Vulnerability Database (EUVD). This initiative aims to provide machine-readable, context-aware information that complements global standards, ensuring that security practitioners have the necessary tools to navigate conflicting data sources while maintaining interoperability. Ultimately, the goal is a more sustainable, transparent ecosystem that prioritizes collective security over individual corporate reputation.


Most organizations make a mess of handling digital disruption

According to a recent Economist Impact study supported by Telstra International, a staggering 75% of organizations struggle to handle digital disruption effectively. The research highlights that while many businesses possess the intent to remain resilient, there is a significant gap between their ambitions and actual execution. This failure is primarily attributed to weak governance, limited coordination with external partners, and poor visibility beyond immediate organizational boundaries. Only 25% of respondents claimed their disruption responses go as planned, with a mere 21% maintaining dedicated teams for digital resilience. Furthermore, existing risk management frameworks are often too narrow, focusing heavily on cybersecurity while neglecting critical factors like geopolitical shifts, supplier vulnerabilities, and climate-related risks. Legacy technology continues to plague about 60% of firms in the US and UK, further complicating the integration of resilience into modern systems. While financial and IT sectors show more progress in modernizing core infrastructure, the public and industrial sectors significantly lag behind. Ultimately, the report emphasizes that technical strength alone is insufficient. Real digital resilience requires senior-level ownership, comprehensive scenario testing across entire ecosystems, and a cultural shift toward readiness to ensure that human judgment and diverse expertise can effectively navigate the complexities of modern digital crises.


Quantum Computing vs Classical Computing – What’s the Real Difference

The guide explores the fundamental differences between classical and quantum computing, emphasizing how they approach problem-solving through distinct physical principles. Classical computers rely on bits, representing data as either a zero or a one, and process instructions linearly using transistors. In contrast, quantum computers utilize qubits, which leverage the principles of superposition and entanglement to represent and process vast amounts of data simultaneously. This multidimensional approach allows quantum systems to potentially solve specific, complex problems — such as large-scale optimization, molecular simulation for drug discovery, and breaking traditional cryptographic codes — exponentially faster than today’s most powerful supercomputers. However, the guide clarifies that quantum computers are not intended to replace classical systems for everyday tasks. Instead, they serve as specialized tools for high-compute workloads. While classical computing is reaching its physical scaling limits, quantum technology faces its own hurdles, including qubit fragility and the ongoing need for robust error correction. As of 2026, the industry is transitioning from experimental NISQ-era devices toward fault-tolerant systems, marking a pivotal moment where quantum advantage becomes increasingly tangible for commercial applications. This "tug of war" suggests a hybrid future where both architectures coexist to drive global innovation and discovery across various sectors.
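
The bit-versus-qubit distinction can be made concrete with a few lines of plain Python: a single qubit is a pair of complex amplitudes, and a Hadamard gate turns the definite |0⟩ state into an equal superposition. This is a toy illustration of the principle, not a simulation of real hardware:

```python
# Toy illustration of superposition: a single qubit as a 2-vector of
# complex amplitudes, and a Hadamard gate that puts |0> into an equal
# superposition of |0> and |1>.
import math

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of amplitudes."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)             # classical-like |0> state
superposed = hadamard(zero)          # amplitudes (1/sqrt2, 1/sqrt2)
p0, p1 = probabilities(superposed)   # each outcome now equally likely
back = hadamard(superposed)          # a second Hadamard restores |0>
```

That last line is the part classical bits cannot imitate: the amplitudes interfere and the randomness cancels, which is the mechanism quantum algorithms exploit.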

Daily Tech Digest - August 13, 2025


Quote for the day:

“You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case.” -- Ken Kesey


9 things CISOs need to know about the dark web

There’s a growing emphasis on scalability and professionalization, with aggressive promotion and recruitment for ransomware-as-a-service (RaaS) operations. This includes lucrative affiliate programs to attract technically skilled partners and tiered access enabling affiliates to pay for premium tools, zero-day exploits or access to pre-compromised networks. The dark web is also fragmenting into specialized communities: credential marketplaces; exploit exchanges for zero-days, malware kits, and compromised-system access; and forums for fraud tools. Initial access brokers (IABs) are thriving, selling entry points into corporate environments, which are then monetized by ransomware affiliates or data extortion groups. Ransomware leak sites showcase attackers’ successes, publishing sample files, threats of full data dumps as well as names and stolen data of victim organizations that refuse to pay. ... While DDoS-for-hire services have existed for years, their scale and popularity are growing. “Many offer free trial tiers, with some offering full-scale attacks with no daily limits, dozens of attack types, and even significant 1 Tbps-level output for a few thousand dollars,” Richard Hummel, cybersecurity researcher and threat intelligence director at Netscout, says. The operations are becoming more professional and many platforms mimic legitimate e-commerce sites displaying user reviews, seller ratings, and dispute resolution systems to build trust among illicit actors.


CMMC Compliance: Far More Than Just an IT Issue

For many years, companies working with the US Department of Defense (DoD) treated regulatory mandates including the Cybersecurity Maturity Model Certification (CMMC) as a matter best left to the IT department. The prevailing belief was that installing the right software and patching vulnerabilities would suffice. Yet, reality tells a different story. Increasingly, audits and assessments reveal that when compliance is seen narrowly as an IT responsibility, significant gaps emerge. In today’s business environment, managing controlled unclassified information (CUI) and federal contract information (FCI) is a shared responsibility across various departments – from human resources and manufacturing to legal and finance. ... For CMMC compliance, there needs to be continuous assurance involving regularly monitoring systems, testing controls and adapting security protocols whenever necessary. ... Businesses are having to rethink much of their approach to security because of CMMC requirements. Rather than treating it as something to be handed off to the IT department, organizations must now commit to a comprehensive, company-wide strategy. Integrating thorough physical security, ongoing training, updated internal policies and steps for continuous assurance means companies can build a resilient framework that meets today’s regulatory demands and prepares them to rise to challenges on the horizon.


Beyond Burnout: Three Ways to Reduce Frustration in the SOC

For years, we’ve heard how cybersecurity leaders need to get “business smart” and better understand business operations. That is mostly happening, but it’s backwards. What we need is for business leaders to learn cybersecurity, and even further, recognize it as essential to their survival. Security cannot be viewed as some cost center tucked away in a corner; it’s the backbone of your entire operation. It’s also part of an organization’s cyber insurance – the internal insurance. Simply put, cybersecurity is the business, and you absolutely cannot sell without it. ... SOCs face a deluge of alerts, threats, and data that no human team can feasibly process without burning out. While many security professionals remain wary of artificial intelligence, thoughtfully embracing AI offers a path toward sustainable security operations. This isn’t about replacing analysts with technology. It’s about empowering them to do the job they actually signed up for. AI can dramatically reduce toil by automating repetitive tasks, provide rapid insights from vast amounts of data, and help educate junior staff. Instead of spending hours manually reviewing documents, analysts can leverage AI to extract key insights in minutes, allowing them to apply their expertise where it matters most. This shift from mundane processing to meaningful analysis can dramatically improve job satisfaction.
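
The "reduce toil" argument does not require sophisticated AI to demonstrate; even the simplest automation, collapsing duplicate alerts into counted triage items, cuts the volume an analyst must read. A minimal sketch with illustrative field names:

```python
# Minimal sketch of reducing SOC toil: collapse duplicate alerts into one
# triage item with a count, so analysts review unique signals rather than
# raw volume. Field names are illustrative.
from collections import Counter

def dedupe_alerts(alerts):
    """Group alerts by (source, rule) and return items sorted by frequency."""
    counts = Counter((a["source"], a["rule"]) for a in alerts)
    return [
        {"source": src, "rule": rule, "count": n}
        for (src, rule), n in counts.most_common()
    ]

alerts = [
    {"source": "fw-01", "rule": "port-scan"},
    {"source": "fw-01", "rule": "port-scan"},
    {"source": "edr-07", "rule": "lsass-access"},
    {"source": "fw-01", "rule": "port-scan"},
]
triage = dedupe_alerts(alerts)  # 4 raw alerts collapse to 2 triage items
```

Each step up from here — enrichment, summarization, AI-assisted triage — follows the same pattern: the machine compresses the deluge so the analyst spends time on judgment, not processing.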


7 legal considerations for mitigating risk in AI implementation

AI systems often rely on large volumes of data, including sensitive personal, financial and business information. Compliance with data privacy laws is critical, as regulations such as the European Union’s General Data Protection Regulation, the California Consumer Privacy Act and other emerging state laws impose strict requirements on the collection, processing, storage and sharing of personal data. ... AI systems can inadvertently perpetuate or amplify biases present in training data, leading to unfair or discriminatory outcomes. This risk is present in any sector, from hiring and promotions to customer engagement and product recommendations. ... The legal framework surrounding AI is evolving rapidly. In the U.S., multiple federal agencies, including the Federal Trade Commission and Equal Employment Opportunity Commission, have signaled they will apply existing laws to AI use cases. AI-specific state laws, including in California and Utah, have taken effect in the last year. ... AI projects involve unique intellectual property questions related to data ownership and IP rights in AI-generated works. ... AI systems can introduce new cybersecurity vulnerabilities, including risks related to data integrity, model manipulation and adversarial attacks. Organizations must prioritize cybersecurity to protect AI assets and maintain trust.
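
For the bias risk in particular, one widely used screening heuristic is the "four-fifths rule": compare selection rates between groups, and treat an impact ratio below 0.8 as a red flag for disparate impact. A sketch with invented hiring numbers — this is a screen, not a legal determination:

```python
# Sketch of a common bias screen: the "four-fifths rule" compares
# selection rates between two groups; a ratio below 0.8 is a
# conventional red flag for disparate impact. Screening heuristic
# only -- not a legal determination.

def selection_rate(selected, total):
    return selected / total

def four_fifths_check(rate_a, rate_b, threshold=0.8):
    """Return (impact_ratio, passes), comparing the lower rate to the higher."""
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= threshold

# Illustrative data: group A selected 30 of 100, group B selected 18 of 100.
ratio, passes = four_fifths_check(selection_rate(30, 100),
                                  selection_rate(18, 100))
```

Here the ratio is 0.6, well under the 0.8 threshold, so the model's outputs would warrant investigation before deployment — exactly the kind of check the regulatory agencies cited above expect organizations to run.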


Forrester’s Keys To Taming ‘Jekyll and Hyde’ Disruptive Tech

“Disruptive technologies are a double-edged sword for environmental sustainability, offering both crucial enablers and significant challenges,” explained the 15-page report written by Abhijit Sunil, Paul Miller, Craig Le Clair, Renee Taylor-Huot, Michele Pelino, with Amy DeMartine, Danielle Chittem, and Peter Harrison. “On the positive side,” it continued, “technology innovations accelerate energy and resource efficiency, aid in climate adaptation and risk mitigation, monitor crucial sustainability metrics, and even help in environmental conservation.” “However,” it added, “the necessary compute power, volume of waste, types of materials needed, and scale of implementing these technologies can offset their benefits.” ... “To meet sustainability goals with automation and AI,” he told TechNewsWorld, “one of our recommendations is to develop proofs of concept for ‘stewardship agents’ and explore emerging robotics focused on sustainability.” When planning AI operations, Franklin Manchester, a principal global industry advisor at SAS, an analytics and artificial intelligence software company in Cary, N.C., cautioned, “Not every nut needs to be cracked with a sledgehammer.” “Start with good processes — think lean process mapping, for example — and deploy AI where it makes sense to do so,” he told TechNewsWorld.


5 Key Benefits of Data Governance

Data governance processes establish data ethics, a code of behavior providing a trustworthy business climate and compliance with regulatory requirements. The IAPP calculates that 79% of the world’s population is now protected under privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This statistic highlights the importance of governance frameworks for risk management and customer trust. ... Data governance frameworks recognize data governance roles and responsibilities and streamline processes so that corporate-wide communications can improve. This systematic approach sets up businesses to be more agile, increasing the “freedom to innovate, invest, or hunker down and focus internally,” says O’Neal. For example, Freddie Mac developed a solid data strategy that streamlined data governance communications and later secured the buy-in needed for the next iteration. ... With a complete picture of business activities, challenges, and opportunities, data governance creates the flexibility to respond quickly to changing needs. This allows for better self-service business intelligence, where business users can gather multi-structured data from various sources and convert it into actionable intelligence.


Architecture Lessons from Two Digital Transformations

The prevailing mindset was that of “Don’t touch what isn’t broken”. This approach, though seemingly practical, reflected a deeper inertia, rooted in a cash-strapped culture and leadership priorities that often leaned towards prestige over progress. Over the years, the organization had acquired others in an attempt to grow its customer base. These mergers and acquisitions led to the inheritance of a much larger legacy estate. The mess burgeoned to an extent that they needed a transformation, not now, but yesterday! That is exactly where the Enterprise Architecture practice comes into the picture. Strategically, a greenfield approach was suggested. A brand-new system from scratch, with modern data centers for the infrastructure, cloud platforms for the applications, plug-and-play architecture (or composable architecture, as it is better known) for technology, unified yet diversified multi-branding under one umbrella, and the whole works. Where things slowly started taking a downhill turn is when they decided to “outsource” the entire development of this new and shiny platform to a vendor. The reasoning was that the organization did not want to diversify from being a banking institution and turn into an IT-heavy organization. They sought experienced engineering teams who could hit the ground running and deliver in 2 years flat.


Cloud security in multi-tenant environments

The most useful security strategy in a multi-tenant cloud environment comes from cultivating a security-first culture. This means educating teams on the intricacies of cloud security and implementing stringent password and authentication policies, thereby promoting secure development practices. Security teams and company executives may reduce the possible effects of breaches and remain ready for changing threats with the support of event simulations, tabletop exercises, and regular training. ... As we navigate the evolving landscape of enterprise cloud computing, multi-tenant environments will undoubtedly remain a cornerstone of modern IT infrastructure. However, the path forward demands more than just technological adaptation – it requires a fundamental shift in how we approach security in shared spaces. Organizations must embrace a comprehensive defense-in-depth strategy that transcends traditional boundaries, encompassing everything from robust infrastructure hardening to sophisticated application security and meticulous user governance. The future of cloud computing need not present a binary choice between efficiency and security. ... By placing security at the heart of multi-tenant operations, organizations can fully harness the transformative power of cloud technology while protecting their most critical assets.


This Big Data Lesson Applies to AI

Bill Schmarzo was one of the most vocal supporters of the idea that there were no silver bullets, and that successful business transformation was the result of careful planning and a lot of hard work. A decade ago, the “Dean of Big Data” let this publication in on the secret recipe he would use to guide his clients. He called it the SAM test, and it allowed business leaders to gauge the viability of new IT projects through three lenses. First, is the new project strategic? That is, will it make a big difference for the company? If it won’t, why are you investing lots of money? Second, is the proposed project actionable? You might be able to get some insight with the new tech, but can your business actually do anything with it? Third, is the project material? The new project might technically be feasible, but if the costs outweigh the benefits, then it’s a failure. Schmarzo, who is currently working as Dell’s Customer AI and Data Innovation Strategist, was also a big proponent of the importance of data governance and data management. The same data governance and data management bugaboos that doomed so many big data projects are, not surprisingly, raising their ugly little heads in the age of AI. Which brings us to the current AI hype wave. We’re told that trillions of dollars are on the line with large language models, that we’re on the cusp of a technological transformation the likes of which we have never seen.
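
The SAM test is simple enough to render literally: a project must clear all three lenses, and failing any one of them fails the whole test. A sketch in which the verdicts themselves stand in for what would be judgment calls in practice:

```python
# A literal rendering of Schmarzo's SAM test as a gate: a project must be
# strategic, actionable, AND material. The boolean inputs stand in for
# what would be careful judgment calls in a real evaluation.

def sam_test(strategic, actionable, material):
    """Pass only if the project clears all three lenses."""
    verdicts = {"strategic": strategic, "actionable": actionable,
                "material": material}
    failed = [lens for lens, ok in verdicts.items() if not ok]
    return (len(failed) == 0, failed)

# An insight you cannot act on fails the test, however impressive the tech.
ok, failed = sam_test(strategic=True, actionable=False, material=True)
```

Applied to the current AI wave, the test is a useful antidote to hype: a large language model pilot that is strategic but not actionable, or actionable but not material, fails just as a doomed big data project would have.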


Sovereign cloud and digital public infrastructure: Building India’s AI backbone

India’s Digital Public Infrastructure (DPI) is an open, interoperable platform that powers essential services like identity and payments. It comprises foundational systems that are accessible, secure, and support seamless integration. In practice, this has taken shape as the famous “India Stack.” ... India’s digital economy is on an exciting trajectory. A large slice of that will be AI-driven services like smart agriculture, precision health, financial inclusion, and more. But to fully capitalize on this opportunity, we need both rich data and trusted compute. DPI provides vast amounts of structured data (financial records, IDs, health info) and access channels. Combining that with a sovereign cloud means we can turn data into insight on Indian soil. Indian regulators now view data itself as a strategic asset and fuel for AI. AI pilots (e.g., local-language advisory bots) are already being built on top of DPI platforms (UPI, ONDC, etc.) to deliver inclusive services. And the government has even subsidized thousands of GPUs for researchers. But all this computing and data must be hosted securely. If our AI models and sensitive datasets live on foreign soil, we remain vulnerable to geopolitical shifts and export controls. ... Now, policy is catching up with sovereignty. In 2023, the new Digital Personal Data Protection (DPDP) Act formally mandated local storage for sensitive personal data. 

Daily Tech Digest - December 13, 2024

The fintech revolution: How digital disruption is reshaping the future of banking

Several pivotal trends have converged to accelerate fintech adoption. The JAM trinity—Jan Dhan, Aadhaar, and Mobile—became the cornerstone of India’s fintech revolution, enabling seamless, paperless onboarding and verification for financial services. Aadhaar-enabled biometric authentication, for instance, has transformed how identity verification is conducted, making the process entirely mobile-based. The Unified Payments Interface (UPI) is perhaps the most profound disruptor. Introduced by the Indian government as part of its push for a cashless economy, UPI has redefined peer-to-peer (P2P) and person-to-merchant (P2M) transactions. As of September 2024, UPI transactions have reached a staggering 15 billion per month, with transaction values surpassing INR 20.6 trillion, marking a 16x increase in volume and a 13x increase in value over five years. UPI’s convenience and speed have made it the default payment mode for millions, further marginalising the role of traditional banking infrastructure. At the same time, blockchain technology is emerging as a force that could dramatically reduce bank operational costs. Decentralised, secure, and transparent, blockchain allows financial institutions to overhaul their legacy systems.


Bridging the AI Skills Gap: Top Strategies for IT Teams in 2025

Daly explained that practical applications are key to learning, and creating cross-functional teams that include AI experts can facilitate knowledge sharing and the practical application of new skills. "To prepare for 2025 and beyond, it's crucial to integrate AI and ML into the core business strategy beyond R&D investment or technical roles, but also into broader organizational talent development," she said. "This ensures all employees understand the opportunity [and] potential impact, and are trained on responsible use." ... Kayne McGladrey, IEEE senior member and field CISO at Hyperproof, said AI ethics skills are important because they ensure that AI systems are developed and used responsibly, aligning with ethical standards and societal values. "These skills help in identifying and mitigating biases, ensuring transparency, and maintaining accountability in AI operations," he explained. ... Scott Wheeler, cloud practice lead at Asperitas, said building a culture of innovation and continual learning is the first step in closing a skills gap, particularly for newer technologies like AI. "Provide access to learning resources, such as on-demand platforms like Coursera, Udemy, Wizlabs," he suggested. "Embed learning into IT projects by allocating time in the project schedule and monitor and adjust the various programs based on what works or doesn't work for your organization."


What Makes the Ideal Platform Engineer?

Platform engineers decide on a platform — consisting of many different tools, workflows and capabilities — that DevOps, developers and others in the business can use to develop and monitor the development of software. They base these decisions on what will work best for these users. ... The old adage that every business is unique applies here; platform engineering doesn’t look the same in every organization, nor do the platforms or portals that are used. But there are some key responsibilities that platform engineers will often have and skills that they require. Noam Brendel is a DevOps team lead at Checkmarx, an application security firm that has embraced platform engineering. He believes a platform engineer’s focus should be on improving developer excellence. “The perfect platform engineer helps developers by building systems that eliminate bottlenecks and increase collaboration,” he said. ... “Platform engineers need to have a strong understanding of how everything is connected and how the platform is built behind the scenes,” explained Zohar Einy, CEO of Port, a provider of open internal developer portals. He emphasized the importance of knowing how the company’s technical stack is structured and which development tools are used.


Biometrics and AI Knock Out Passwords in the Security Battle

Biometrics and AI-powered authentication have moved beyond concept to successful application. For instance, HSBC's Voice ID voice identification technology analyzes over 100 characteristics of an individual's voice, maintains a sample of the customer's voice, and compares it to the caller's voice. ... Successfully integrating biometrics and AI into existing systems depends on organizations following best practices. Leaders can assess organizational needs by conducting a security audit to identify vulnerabilities that biometrics and AI can address. This information is then used to create a roadmap for implementation that considers budget, resources, and timelines. Involving appropriate staff in such discussions is essential so all stakeholders understand the factors considered in decision-making. Selecting the right technology calls for careful vendor evaluation and identification of solutions that align with the organization's requirements and compliance obligations. Once these decisions are solidified, it is prudent to begin integration with pilot programs. Small-scale deployments test effectiveness and address any unforeseen issues before large-scale implementation.
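HSBC's actual matching algorithm is proprietary, but the core idea of comparing a caller's voice against an enrolled voiceprint can be sketched as a similarity check over feature vectors. Everything below (the vector representation, the 0.85 threshold, the function names) is an illustrative assumption, not HSBC's implementation:

```python
import math

def cosine_similarity(a, b):
    # Compare two equal-length feature vectors (e.g. pitch, cadence,
    # spectral characteristics extracted from a voice sample).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def verify_speaker(enrolled, caller, threshold=0.85):
    # Accept the caller only if their features are close enough
    # to the enrolled voiceprint; the threshold is a tunable
    # trade-off between false accepts and false rejects.
    return cosine_similarity(enrolled, caller) >= threshold
```

Real systems extract far richer features and calibrate the threshold against fraud data, but the accept/reject decision ultimately reduces to a comparison like this.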


CISA, Five Eyes issue hardening guidance for communications infrastructure

The joint guidance is in direct response to the breach of telecommunications infrastructure carried out by the Chinese government-linked hacking collective known as Salt Typhoon. ... “Although tailored to network defenders and engineers of communications infrastructure, this guide may also apply to organizations with on-premises enterprise equipment,” the guidance states. “The authoring agencies encourage telecommunications and other critical infrastructure organizations to apply the best practices in this guide.” “As of this release date,” the guidance says, “identified exploitations or compromises associated with these threat actors’ activity align with existing weaknesses associated with victim infrastructure; no novel activity has been observed. Patching vulnerable devices and services, as well as generally securing environments, will reduce opportunities for intrusion and mitigate the actors’ activity.” Visibility, a cornerstone of network defense that enables monitoring, detecting, and understanding activity within an organization's infrastructure, is pivotal in identifying potential threats, vulnerabilities, and anomalous behaviors before they escalate into significant security incidents.


Tackling software vulnerabilities with smarter developer strategies

No two developers solve a problem or build a software product the same way. Some arrive at their career through formal college education, while others are self-taught with minimal mentorship. Styles and experiences vary wildly. Equally, we should expect them to approach secure coding practices and guidelines with a similar diversity of thought. Organizations must account for this wide diversity in their secure development practices – training, guidelines, standards. These may be foreign concepts even to a highly proficient developer, and we need to give our developers the time and space to learn and ask questions, with sufficient time to develop secure coding proficiency. ... Best-in-class organizations have established ‘security champions’ programs where highly skilled developers are empowered to be a team-level resource for secure coding knowledge and best practice, so that institutional knowledge spreads. This is particularly important in remote environments where security teams may be unfamiliar or untrusted faces, and internal development team leaders are that much more important in setting the tone and direction for adopting a security mindset and applying security principles.


Developing an AI platform for enhanced manufacturing efficiency

To power our AI Platform, we opted for a hybrid architecture that combines our on-premises infrastructure and cloud computing. The first objective was to promote agile development. The hybrid cloud environment, coupled with a microservices-based architecture and agile development methodologies, allowed us to rapidly iterate and deploy new features while maintaining robust security. The choice of a microservices architecture arose from the need to flexibly respond to changes in services and libraries, and as part of this shift, our team also adopted the Scrum development method, where we release features incrementally in short cycles of a few weeks, ultimately resulting in streamlined workflows. ... The second objective is to use resources effectively. The manufacturing floor, where AI models are created, is now also facing strict cost efficiency requirements. With a hybrid cloud approach, we can use on-premises resources during normal operations and scale to the cloud during peak demand, thus reducing GPU usage costs and optimizing performance. This allows us to flexibly adapt to an expected increase in the number of users of AI Platform in the future, as well.
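The burst-to-cloud placement described above amounts to a simple scheduling rule: fill on-premises GPU capacity first, then overflow to the cloud. A minimal sketch, in which the capacity figure, job structure, and function name are all assumptions for illustration rather than the team's actual scheduler:

```python
ON_PREM_GPUS = 8  # assumed on-premises GPU capacity

def place_jobs(pending_jobs, on_prem_busy):
    """Route training jobs on-prem while capacity remains,
    bursting the overflow to the cloud during peak demand."""
    placements = []
    free = ON_PREM_GPUS - on_prem_busy
    for job in pending_jobs:
        if job["gpus"] <= free:
            free -= job["gpus"]
            placements.append((job["name"], "on-prem"))
        else:
            # Not enough local GPUs left: pay for cloud capacity
            # only for the overflow, keeping steady-state costs low.
            placements.append((job["name"], "cloud"))
    return placements
```

The design point is that cloud spend scales with peak demand only, while routine workloads run on hardware that is already paid for.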


Privacy is a human right, and blockchain is critical to securing It

While blockchain offers decentralized and secure transactions, the lack of privacy on public blockchains can expose users to risks, from theft to persecution. In October, details emerged of one of the largest in-person crypto thefts in US history after a DC man was targeted when kidnappers were able to identify him as an early crypto investor. However, despite the case for on-chain privacy, it’s proven difficult to advance any real-world implementations. Along with the regulatory challenges faced by segments such as privacy coins and mixers, certain high-profile missteps have done little to advance the case for on-chain privacy. Worldcoin, Sam Altman’s much-touted crypto identity project that collected biometric data from users, has also failed to live up to expectations due to, perversely, concerns from regulators about breaches of users’ data privacy. In August, the government of Kenya suspended Worldcoin’s operations following concerns about data security and consent practices. In October, the company announced it was pivoting away from the EU and towards Asian and Latin American markets, following regulatory wrangling over the European GDPR rules.


Transforming fragmented legacy controls at large banks

You’re not just talking about replacing certain components of a process with technology. There’s also a cost to this change, and it’s not always at the top of the list when budgets come around. Usually, spend goes to areas that are revenue generating or more in the innovation space. It can be somewhat of a hard sell to the higher-ups as to why they would spend money to change something, and a lot of organisations aren’t great at articulating the business case for it. ... Take the operational resilience perspective, for example: that’s about being able to get your arms around your important business services, to use regulatory language. What is supporting them? What does it take to maintain them, keep them resilient and available, and recover them? The reality is that this used to be infinitely more straightforward. Most of the systems may have been in your own data centre in your own building. Now, the ecosystems that support most of these services are much more complex. You’ve obviously got cloud providers, SaaS providers, and third parties that you’ve outsourced to. You’ve also got a huge number of different services where, even if you’ve bought them and they’re in-house, there are a myriad of internal teams to navigate.


Why the Growing Adoption of IoT Demands Seamless Integration of IT and OT

Effective cybersecurity in OT environments requires a mix of skills and knowledge from both IT and OT teams. This includes professionals from IT infrastructure and cybersecurity, as well as control system engineers, field operations staff, and asset managers typically found in OT. ... The integration of IT and OT through advanced IoT protocols represents a major step forward in securing industrial and healthcare systems. However, this integration introduces significant challenges. I propose a new approach to IoT security that incorporates protocol-agnostic application layer security, lightweight cryptographic algorithms, dynamic key management, and end-to-end encryption, all based on zero-trust network architecture (ZTNA). ... In OT environments, remediation steps must go beyond traditional IT responses. While many IT security measures reset communication links and wipe volatile memory to prevent further compromise, additional processes are needed for identifying, classifying, and investigating cyber threats in OT systems. Furthermore, organizations can benefit from creating unified governance structures and cross-training programs that align the priorities of IT and OT teams. 
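Two ingredients of the proposed approach, dynamic key management and message integrity, can be sketched with Python's standard library: derive a fresh key per session from a device master key, and authenticate every message so tampering is detected. This toy covers key derivation and integrity only; a real deployment would add actual encryption (for example AES-GCM or a vetted lightweight cipher), and all names here are hypothetical:

```python
import hashlib
import hmac

def derive_session_key(master_key: bytes, session_id: bytes) -> bytes:
    # Dynamic key management: a fresh per-session key derived from the
    # device's master key, so a leaked session key has limited blast radius.
    return hmac.new(master_key, b"session" + session_id, hashlib.sha256).digest()

def seal(session_key: bytes, payload: bytes) -> bytes:
    # Append an authentication tag so the receiver can detect tampering.
    tag = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload + tag

def open_sealed(session_key: bytes, message: bytes) -> bytes:
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(session_key, payload, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload
```

In a zero-trust design, the receiving side runs this check on every message rather than trusting anything that arrives over an "internal" OT network.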



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - August 26, 2024

The definitive guide to data pipelines

A key data pipeline capability is to track data lineage, including methodologies and tools that expose data’s life cycle and help answer questions about who, when, where, why, and how data changes. Data pipelines transform data, which is part of the data lineage’s scope, and tracking data changes is crucial in regulated industries or when human safety is a consideration. ... Other data catalog, data governance, and AI governance platforms may also have data lineage capabilities. “Business and technical stakeholders must equally understand how data flows, transforms, and is used across sources with end-to-end lineage for deeper impact analysis, improved regulatory compliance, and more trusted analytics,” says Felix Van de Maele, CEO of Collibra. ... As for the data ops behind data pipelines: when you deploy pipelines, how do you know whether they receive, transform, and send data accurately? Are data errors captured, and do single-record data issues halt the pipeline? Are the pipelines performing consistently, especially under heavy load? Are transformations idempotent, or are they streaming duplicate records when data sources have transmission errors?
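The idempotency question is easiest to see in code. A hedged sketch of a transform keyed on stable record IDs, so replaying a batch after a transmission error emits no duplicates (the record schema here is an assumption for illustration):

```python
def transform(records, seen_ids=None):
    """Idempotent transform: records carry a stable ID, so replaying
    the same batch (e.g. after a transmission error upstream) does
    not emit duplicate output records."""
    seen_ids = set() if seen_ids is None else seen_ids
    out = []
    for rec in records:
        if rec["id"] in seen_ids:
            continue  # already processed: replay is a no-op
        seen_ids.add(rec["id"])
        out.append({"id": rec["id"], "value": rec["value"].strip().lower()})
    return out
```

In production the `seen_ids` state would live in durable storage (or the sink would do an upsert on the ID), but the property is the same: running the pipeline twice on the same input yields the same output once.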


Living with trust issues: The human side of zero trust architecture

As we’ve become more dependent on technology, IT environments have become more complex. This has made threats more intense and could even pose a serious danger. To tackle these growing security challenges — which needed a stronger and more flexible approach — industry experts, security practitioners, and tech providers came together to develop the zero trust architecture (ZTA) framework. This development led to a growing recognition of the importance of prioritizing verification over trust, which made ZTA a cornerstone of modern cybersecurity strategies. The main idea behind ZTA is to “never trust, always verify.” ... Implementing the ZTA framework means that every action the IT and security teams handle is filtered through a security-first lens. However, the over-repeated mantra of “never trust, always verify” may affect the psychological well-being of those implementing it. Imagine spending hours monitoring every network activity while constantly questioning if the information is genuine and if people’s motives are pure. This suspicious climate not only affects the work environment but also spills over into personal interactions, affecting trust with others. 
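Operationally, "never trust, always verify" means re-checking identity and device posture on every request, regardless of where it originates on the network. A deliberately simplified sketch, in which the request shape and the two checks are hypothetical:

```python
def handle_request(request, verify_identity, verify_device):
    # Zero trust: every request re-proves who is asking and from what
    # device, with no implicit trust for "internal" network locations.
    if not verify_identity(request["token"]):
        return {"status": 401, "body": "identity not verified"}
    if not verify_device(request["device_id"]):
        return {"status": 403, "body": "device posture check failed"}
    return {"status": 200, "body": "ok"}
```

The human cost described above comes from the fact that this loop never ends: there is no "trusted zone" where the checks can be relaxed.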


Top technologies that will disrupt business in 2025

Chaplin finds ML useful for identifying customer-related trends and predicting outcomes. That sort of forecasting can help allocate resources more effectively, he says, and engage customers better — for example when recommending products. “While gen AI undoubtedly has its allure, it’s important for business leaders to appreciate the broader and more versatile applications of traditional ML,” he says. ... What Skillington touches on is the often-overlooked facet of any successful digital transformation: It all starts with data. By breaking down data silos, establishing holistic data governance strategies, developing the right data architecture for the business, and developing data literacy across disciplines, organizations can not only gain better access to their data but also better understand how ... Edge computing and 5G are two complementary technologies that are maturing, getting smaller, and delivering tangible business results securely, says Rogers Jeffrey Leo John, CTO and co-founder of DataChat. “Edge devices such as mobile phones can now run intensive tasks like AI and ML, which were once only possible in data centers,” he says.


Meta presents Transfusion: A Recipe for Training a Multi-Modal Model Over Discrete and Continuous Data

Transfusion is trained on a balanced mixture of text and image data, with each modality being processed through its specific objective: next-token prediction for text and diffusion for images. The model’s architecture consists of a transformer with modality-specific components, where text is tokenized into discrete sequences and images are encoded as latent patches using a variational autoencoder (VAE). The model employs causal attention for text tokens and bidirectional attention for image patches, ensuring that both modalities are processed effectively. Training is conducted on a large-scale dataset consisting of 2 trillion tokens, including 1 trillion text tokens and 692 million images, each represented by a sequence of patch vectors. The use of U-Net down and up blocks for image encoding and decoding further enhances the model’s efficiency, particularly when compressing images into patches. Transfusion demonstrates superior performance across several benchmarks, particularly in tasks involving text-to-image and image-to-text generation. 
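The mixed attention scheme (causal for text, bidirectional within each image's patch span) can be illustrated by building the corresponding attention mask. This is a sketch based on the description above, not the paper's released code, and the modality-label encoding is an assumption:

```python
def transfusion_mask(modality):
    """Build an attention mask in the spirit of Transfusion: causal
    attention overall, but bidirectional within each image's patch span.
    `modality` is a list like ["text", "img0", "img0", "text"], where
    patches belonging to the same image share a label."""
    n = len(modality)
    allow = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            same_image = modality[i] != "text" and modality[i] == modality[j]
            # Causal: attend to the past. Same image: also attend forward,
            # so patches see the whole image, as diffusion requires.
            allow[i][j] = j <= i or same_image
    return allow
```

In a real transformer this boolean grid would become the additive attention mask (0 where allowed, -inf where not) applied before the softmax.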


AI Assistants: Picking the Right Copilot

The best assistant operates as an agent that understands what context the underlying AI can assume from its known environment. IDE assistants such as GitHub Copilot know that they are responding with programming projects in mind. GitHub Copilot examines script comments as well as syntax in a given script before crafting a suggestion. The tool examines syntax and comments against its trained datasets, consisting of GPT training and the codebase of GitHub's public repositories. GitHub Copilot was trained on the public repositories in GitHub, so it has a slightly different "perspective" on syntax than that of ChatGPT ADA. Thus, the choice of corpus for an AI model can influence what answer an AI assistant yields to users. A good AI assistant should offer a responsive chat feature to indicate its understanding of its environment. Jupyter, Tabnine, and Copilot all offer a native chat UI for the user. The chat experience influences how well a professional feels the AI assistant is working. How well it interprets prompts and how accurate the suggestions are all start with the conversational assistant experience, so technical professionals should note their experiences to see which assistant works best for their projects.


Is the vulnerability disclosure process glitched? How CISOs are being left in the dark

The elephant in the room regarding misaligned motives and communications between researchers and software vendors is that vendors frequently try to hide or downplay the bugs that researchers feel obligated to make public. “The root cause is a deep-seated fear and prioritizing reputation over security of users and customers,” Rapid7’s Condon says. “What it comes down to many times is that organizations are afraid to publish vulnerability information because of what it might mean for them legally, reputationally, and financially if their customers leave. Without a concerted effort to normalize vulnerability disclosure to reward and incentivize well-coordinated vulnerability disclosure, we can pick at communication all we want. Still, the root cause is this fear and the conflict that it engenders between researchers and vendors.” Condon is, however, sympathetic to the vendors’ fears. “They don’t want any information out there because they are understandably concerned about reputational damage. They’re seeing major cyberattacks in the news, CISOs and CEOs dragged in front of Congress or the Senate here in the US, and lawsuits are coming out against them. ...”


Level Up Your Software Quality With Static Code Analysis

Behind high-quality software is high-quality code. The same core coding principles remain true regardless of how the code was written, either by humans or AI coding assistants. Code must be easy to read, maintain, understand and change. Code structure and consistency should be robust and secure to ensure the application performs well. Code devoid of issues helps you attain the most value from your software. ... While static analysis focuses on code quality and reduces the number of problems to be found later in the testing stage, application testing ensures that your software actually runs as it was designed. By incorporating both automated testing and static analysis, developers can manage code quality through every stage of the development process, quickly find and fix issues and improve the overall reliability of their software. A combination of both is vital to software development. In fact, a good static analysis tool can even be integrated into your testing tools to track and report the percentage of code covered by your unit tests. Sonar recommends a test code coverage of 80% or your code will fail to pass the recommended standard.
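Static analysis in miniature: walk a program's syntax tree and flag a known-risky pattern without ever running the code. This toy rule (flagging bare `except:` clauses, which swallow every error) illustrates the technique; it is not what Sonar or any particular tool actually implements:

```python
import ast

def find_bare_excepts(source: str):
    """Return the line numbers of bare `except:` clauses, a classic
    maintainability and reliability smell: they hide real failures."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```

Production analyzers run hundreds of such rules (plus data-flow and taint analysis) over every commit, which is why they pair naturally with the test coverage tracking mentioned above.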


Two strategies to protect your business from the next large-scale tech failure

The key to mitigating another large-scale system failure is to plan for catastrophic events and practice your response. Make dealing with failure part of normal business practices. When failure is unexpected and rare, the processes to deal with it are untested and may even result in actions which make the failure worse. Build a network and a team that can adapt and react to failures. Remember when insurance companies ran their own data centres and disaster recovery tests were conducted twice a year? ... The second strategy for minimizing large-scale failures is to avoid the software monoculture created by the concentration of digital tech suppliers. It’s more complex but worth it. Some corporations have a policy of buying their core networking equipment from three or four different vendors. Yes, it makes day-to-day management a little more difficult, but they have the assurance that if one vendor has a failure, their entire network is not toast. Whether it’s tech or biology, a monoculture is extremely vulnerable to epidemics which can destroy the entire system. In the CrowdStrike scenario, if corporate networks had been a mix of Windows, Linux and other operating systems, the damage would not have been as widespread.


India's Critical Infrastructure Suffers Spike in Cyberattacks

The adoption of emerging technologies such as AI and cloud and the focus on innovation and remote working has driven digital transformations, thus boosting companies' need for more security defenses, according to Manu Dwivedi, partner and leader for cybersecurity at consultancy PwC India. "AI-enabled phishing and aggressive social engineering have elevated ransomware to the top concern," he says. "While cloud-related threats are concerning, greater interconnectivity between IT and OT environments and increased usage of open-source components in software are increasing the available threat surface for attackers to exploit." Indian organizations also need to harden their systems against insider threats, which requires a combination of business strategy, culture, training, and governance processes, Dwivedi says. ... The growing demand for AI has also shaped the threat landscape in the country and threat actors have already started experimenting with different AI models and techniques, says PwC India's Dwivedi. "Threat actors are expected to use AI to generate customized and polymorphic malware based on system exploits, which escapes detection from signature-based and traditional detection methods," he says.


Architectural Patterns for Enterprise Generative AI Apps

In the RAG pattern, we integrate a vector database that can store and index embeddings (numerical representations of digital content). We use various search algorithms like HNSW or IVF to retrieve the top k results, which are then used as the input context. The search is performed by converting the user's query into embeddings. The top k results are added to a well-constructed prompt, which guides the LLM on what to generate and the steps it should follow, as well as what context or data it should consider. ... GraphRAG is an advanced RAG approach that uses a graph database to retrieve information for specific tasks. Unlike traditional relational databases that store structured data in tables with rows and columns, graph databases use nodes, edges, and properties to represent and store data. This method provides a more intuitive and efficient way to model, view, and query complex systems. ... Like the basic RAG system, GraphRAG also uses a specialized database to store the knowledge data it generates with the help of an LLM. However, generating the knowledge graph is more costly compared to generating embeddings and storing them in a vector database. 
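The retrieval step of the basic RAG pattern described above can be sketched in a few lines: embed the query, rank stored chunks by similarity, and inject the top k into the prompt. Real systems use a vector database with HNSW or IVF indexes rather than this brute-force scan, and the prompt wording here is an arbitrary example:

```python
import math

def cosine(a, b):
    # Similarity between the query embedding and a stored embedding.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(query_emb, corpus, k=2):
    # corpus: list of (text, embedding) pairs; rank by similarity.
    ranked = sorted(corpus, key=lambda item: cosine(query_emb, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_emb, corpus, k=2):
    # The top-k chunks become grounding context for the LLM.
    context = "\n".join(retrieve_top_k(query_emb, corpus, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The same skeleton holds for GraphRAG; only the retrieval step changes, swapping the similarity scan for a graph query over nodes and edges.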



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - May 09, 2024

Red Hat delivers accessible, open source Generative AI innovation with Red Hat Enterprise Linux AI

RHEL AI builds on this open approach to AI innovation, incorporating an enterprise-ready version of the InstructLab project and the Granite language and code models along with the world’s leading enterprise Linux platform to simplify deployment across a hybrid infrastructure environment. This creates a foundation model platform for bringing open source-licensed GenAI models into the enterprise. RHEL AI includes: open source-licensed Granite language and code models that are supported and indemnified by Red Hat; a supported, lifecycled distribution of InstructLab that provides a scalable, cost-effective solution for enhancing LLM capabilities and making knowledge and skills contributions accessible to a much wider range of users; and optimised bootable model runtime instances with Granite models and InstructLab tooling packages as bootable RHEL images via RHEL image mode, including optimised PyTorch runtime libraries and accelerators for AMD Instinct™ MI300X, Intel and NVIDIA GPUs, and NeMo frameworks.


Regulators are coming for IoT device security

Up to now, the IoT industry has relied mainly on security by obscurity and the results have been predictable: one embarrassing compromise after another. IoT devices find themselves recruited into botnets, connected locks get trivially unlocked, and cars can get remotely shut down while barreling down the highway at 70mph. Even Apple, who may have the most sophisticated hardware security team on the planet, has faced some truly terrible security vulnerabilities. Regulators have taken note, and they are taking action. In September 2022, NIST fired a warning shot by publishing a technical report that surveyed the state of IoT security and made a series of recommendations. This was followed by a voluntary regulatory scheme—the Cyber Trust Mark, published by the FCC in the US—as well as a draft regulation of the European Union’s upcoming Cyber Resilience Act (CRA). Set to begin rolling out in 2025, the CRA will create new cybersecurity requirements to sell a device in the single market. Standards bodies have not stayed idle. The Connectivity Standards Alliance published the IoT Device Security Specification in March of this year, after more than a year of work by its Product Security Working Group.


Australia revolutionises data management challenges

In Australia, the importance of data literacy is growing rapidly; it is now more essential than ever to be able to comprehend and effectively communicate data as valuable information. Highlighting the importance of data literacy across government agencies is key to unlocking the true power of data. Understanding which data to use for problem-solving, employing critical thinking to comprehend and tackle data strengths and limitations, strategically utilising data to shape policies and implement effective programmes, regulations, and services, and leveraging data to craft a captivating narrative are all essential components of this process. Nevertheless, the ongoing challenge lies in ensuring that employees have the ability to interpret and utilise data effectively. Individuals who are inexperienced with data may find it challenging to comprehend intricate datasets, analyse patterns, and extract valuable insights. Organisations are placing a strong emphasis on data literacy initiatives, aiming to turn individuals with limited data knowledge into experts in the field.


Navigating Architectural Change: Overcoming Drift and Erosion in Software Systems

Architectural drift involves the introduction of design decisions that were not part of the original architectural plan, yet these decisions do not necessarily contravene the foundational architecture. In contrast, architectural erosion occurs when new design considerations are introduced that directly conflict with or undermine the system's intended architecture, effectively violating its guiding principles. ... In software engineering terms, a system may start with a clean architecture but, due to architectural drift, evolve into a complex tangle of multiple architectural paradigms, inconsistent coding practices, redundant components, and dependencies. On the other hand, architectural erosion could be likened to making alterations or additions that compromise the structural integrity of the house. For instance, deciding to remove or alter key structural elements, such as knocking down a load-bearing wall to create an open-plan layout without proper support, or adding an extra floor without considering the load-bearing capacity of the original walls.
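Drift and erosion can be caught early with automated conformance checks that compare code against the intended architecture. A hedged sketch: assuming a hypothetical three-layer design (domain, service, api), a rule flags imports that reach upward through the layers, one common signal of erosion:

```python
import ast

# Hypothetical layering, lowest to highest; the real mapping would
# come from the organization's architecture documentation.
LAYERS = {"domain": 0, "service": 1, "api": 2}

def find_violations(module_layer: str, source: str):
    """Flag imports that reach 'upward' in the layering (e.g. the
    domain layer importing from the API layer)."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        elif isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        else:
            continue
        for name in names:
            top = name.split(".")[0]
            if top in LAYERS and LAYERS[top] > LAYERS[module_layer]:
                violations.append(name)
    return violations
```

Run in CI, a check like this turns the intended architecture from a diagram on a wiki into an enforced constraint, so drift is flagged at review time rather than discovered years later.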


Strong CIO-CISO relations fuel success at Ally

We identify the value we are creating and capturing before we kick off a technology project, and it’s a joint conversation with the business. I don’t think it’s just the business responsibility to say my customer acquisition is going to go up, or my revenue is going to go up by X. There is a technology component to it, which is extremely critical, especially as a full-scale digital-only organization. What does it take for you to build the capability? How long will it take? How much does it cost and what does it cost to run it? ... Building a strong leadership team is critical. Empowering them is even more critical. When people talk about empowerment, they think it means I leave my leaders alone and they go do whatever they want. It’s actually the opposite. We have sensitive and conflict-filled conversations, and the intent of that is to make each other better. If I don’t understand how my leaders are executing, I won’t be able to connect the dots. It is not questioning what they’re doing; it’s asking questions for my learning so I can connect and share learnings from what other leaders are doing. That’s what leads us to preserving that culture.


To defend against disruption, build a thriving workforce

To build a thriving workplace, leaders must reimagine work, the workplace, and the worker. That means shifting away from viewing employees as cogs who hit their deliverables then turn back into real human beings after the day is done. Employees are now more like elite artists or athletes who are inspired to produce at the highest levels but need adequate time to recharge and recover. The outcome is exceptional; the path to getting there is unique. ... Thriving is more than being happy at work or the opposite of being burned out. Rather, one of the cornerstones of thriving is the idea of positive functioning: a holistic way of being, in which people find a purposeful equilibrium between their physical, mental, social, and spiritual health. Thriving is a state that applies across talent categories, from educators and healthcare specialists to data engineers and retail associates. ... In this workplace, people at every level are capable of being potential thought leaders who have influence through the right training, support, and guidance. They don’t have to be just “doers” who simply implement what others tell them to. 


Tips for Controlling the Costs of Security Tools

The total amount that a business spends on security tools can vary widely depending on factors like which types of tools it deploys, the number of users or systems the tools support and the pricing plans of tool vendors. But on the whole, it’s fair to say that tool expenditures are a significant component of most business budgets. Moody’s found, for example, that companies devote about 8% of their total budget to security. That figure includes personnel costs as well as tool costs, but it provides a sense of just how high security spending tends to be relative to overall business expenses. These costs are likely only to grow. IDC believes that total security budgets will increase by more than a third over the next few years, due in part to rising tool costs. This means that finding ways to rein in spending on security tools is important not just for reducing overall costs today, but also preventing cost overruns in the future. Of course, reducing spending can’t amount simply to abandoning critical tools or turning off important features.


UK Regulator Tells Platforms to 'Tame Toxic Algorithms'

The Office of Communications, better known as Ofcom, on Wednesday urged online intermediaries, which include end-to-end encrypted platforms such as WhatsApp, to "tame toxic algorithms." Ensuring recommender systems "do not operate to harm children" is a measure the regulator proposed in a draft for regulations enacting the Online Safety Act, legislation the Conservative government approved in 2023 that is intended to limit children's exposure to damaging online content. The law empowers the regulator to order online intermediaries to identify and restrict pornographic or self-harm content. It also imposes criminal prosecution on those who send harmful or threatening communications. Instagram, YouTube, Google and Facebook are among the 100,000 web services that come under the scope of the regulation and are likely to be affected by the new requirements. "Any service which operates a recommender system and is at higher risk of harmful content should identify who their child users are and configure their algorithms to filter out the most harmful content from children's feeds and reduce the visibility of other harmful content," Ofcom said.


Businesses lack AI strategy despite employee interest — Microsoft survey

“While leaders agree using AI is a business imperative, and many say they won’t even hire someone without AI skills, they also believe that their companies lack a vision and plan to implement AI broadly; they’re stuck in AI inertia,” Colette Stallbaumer, general manager of Copilot and cofounder of WorkLab at Microsoft, said in a pre-recorded briefing. “We’ve come to the hard part of any tech disruption: moving from experimentation to business transformation,” Stallbaumer said. While there’s clear interest in AI’s potential, many businesses are proceeding cautiously on major deployments, analysts say. “Most organizations are interested in testing and deployment, but they are unsure where and how to get the most return,” said Carolina Milanesi, president and principal analyst at Creative Strategies. Security is among the biggest concerns, said Milanesi, “and until that is figured out, it is easier for organizations to shut access down.” As companies start to deploy AI, IT teams face significant demands, said Josh Bersin, founder and CEO of The Josh Bersin Company.


Mayorkas, Easterly at RSAC Talk AI, Security, and Digital Defense

While acknowledging the increasingly ubiquitous use of AI in many services across the nation, Mayorkas commented on the advisory board’s conversation about leveraging that technology in cybersecurity. “It’s a very interesting discussion on what the definition of ‘safe’ is,” he said. “For example, most people now when they speak of the civil rights, civil liberties implications, categorize that under the responsible use of AI, but what we heard yesterday was an articulation of the fact that the civil liberties, civil rights implications of AI really are part and parcel of safety.” ... Technologies are shipped in ways that create risks and vulnerabilities, and they are configured and deployed in ways that are incredibly complex. “It’s eerily reminiscent of William Gibson's ‘Neuromancer,’” Krebs said. “When he talks about cyberspace, he said ‘the unthinkable complexity,’ and that’s what it's like right now to deploy and manage a large enterprise.” “We are just not sitting in place or standing in place, because new technology is emerging on a regular basis,” he said.



Quote for the day:

"Successful people do what unsuccessful people are not willing to do. Don't wish it were easier; wish you were better." -- Jim Rohn