
Daily Tech Digest - May 08, 2026


Quote for the day:

“Everything you’ve ever wanted is on the other side of fear.” -- George Addair



How enterprises can manage LLM costs: A practical guide

Managing large language model (LLM) costs has become a critical priority for enterprises as generative and agentic AI deployments scale. According to the InformationWeek guide, LLM expenses are primarily driven by token pricing and consumption, factors that remain notoriously difficult to forecast due to the iterative nature of AI workflows. This unpredictability is exacerbated by dynamic vendor pricing, a lack of specialized FinOps tools, and limited user awareness regarding how complex queries impact the bottom line. To mitigate these financial risks, the article recommends a multi-pronged approach: matching task complexity to model capability by using lower-cost LLMs for routine work, and implementing technical optimizations like response caching and prompt compression to reduce token usage. Furthermore, enterprises should utilize prompt libraries of validated, efficient inputs and leverage query batching for non-urgent tasks to access vendor discounts. While self-hosting models eliminates third-party token fees, the guide warns of significant underlying costs in infrastructure and energy. Ultimately, successful cost management requires a strategic balance where the productivity gains of AI clearly outweigh the operational expenditures. By proactively setting token allowances and comparing vendor rates, CIOs can prevent AI budgets from spiraling while still fostering innovation across the organization.
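Two of the recommended controls, response caching and matching task complexity to model capability, are easy to prototype. A minimal sketch in Python, with hypothetical per-1K-token rates and a stand-in `call_fn` for the vendor API (nothing here reflects real pricing or any particular vendor's SDK):

```python
import hashlib

# Hypothetical per-1K-token rates; real vendor pricing varies and changes often.
MODEL_RATES = {"small": 0.0005, "large": 0.01}

_cache: dict[str, str] = {}

def route_model(prompt: str, threshold: int = 200) -> str:
    """Match task complexity to model capability: short, routine prompts
    go to the cheaper model (a crude length proxy; real routers use classifiers)."""
    return "small" if len(prompt) < threshold else "large"

def cached_complete(prompt: str, call_fn):
    """Response caching: identical prompts never pay for tokens twice.
    Returns (response, cache_hit)."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:
        return _cache[key], True
    response = call_fn(prompt, route_model(prompt))
    _cache[key] = response
    return response, False
```

A batching layer for non-urgent work would extend this by queuing prompts and flushing them on a schedule against a vendor's discounted batch endpoint.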


The Death of the Firewall

The article "The Death of the Firewall" by Chandrodaya Prasad explores why the firewall has survived decades of premature obituaries to remain a cornerstone of modern cybersecurity. Rather than becoming obsolete, the technology has successfully transitioned from a standalone perimeter appliance into a versatile, integrated architecture. The global firewall market continues to expand, currently valued at approximately $6 billion, as organizations face complex security challenges that identity-centric models alone cannot solve. The firewall has evolved through critical phases, including convergence with SD-WAN for simplified networking and integration with cloud-based Security Service Edge (SSE) frameworks. Crucially, it serves as a necessary enforcement point for inspecting encrypted traffic and implementing post-quantum cryptography. It remains indispensable in Operational Technology (OT) sectors, such as manufacturing and healthcare, where legacy systems and IoT devices cannot support endpoint agents or tolerate cloud-based latency. For these heavily regulated industries, the firewall is not merely an architectural choice but a fundamental requirement for regulatory compliance. Ultimately, the firewall’s endurance is attributed to its ongoing adaptation, offloading intelligence to the cloud while maintaining essential local execution. As cyber threats grow more sophisticated due to AI, the firewall is evolving into a vital, persistent component of a unified security fabric.


AI clones: the good, the bad, and the ugly

The Computerworld article "AI clones: The good, the bad, and the ugly" examines the dual-edged nature of digital personas, categorizing their applications into three distinct ethical spheres. Under "the good," the author highlights authorized use cases where public figures like Imran Khan and Eric Adams employ AI voice clones to transcend physical or linguistic barriers, amplifying their reach and accessibility. However, "the bad" introduces the problematic rise of nonconsensual professional cloning. Tools like "Colleague Skill" enable individuals to replicate the expertise and communication styles of coworkers or supervisors, often to retain institutional knowledge or manipulate workplace dynamics. This section also underscores the threat of sophisticated financial fraud perpetrated through voice impersonation. Finally, "the ugly" explores the deeply controversial territory of "Ex-Partner Skill" and "digital resurrection." These tools allow users to simulate interactions with former or deceased loved ones by mimicking subtle nuances and shared memories, raising profound ethical concerns regarding consent and emotional health. Ultimately, the piece argues that as AI cloning technology becomes more accessible, society must navigate the erosion of reality and establish clear boundaries to protect individual identity and privacy in an increasingly synthetic world.


Fire at Dutch data center has many unintended consequences

On May 7, 2026, a significant fire erupted at the NorthC data center in Almere, Netherlands, triggering a regional emergency response and demonstrating the fragility of modern digital infrastructure. The blaze, which originated in the technical compartment housing critical power systems, forced emergency services to order a total power shutdown. Although the server rooms remained largely protected by fire-resistant separations, the resulting outage caused widespread, often bizarre, secondary consequences. Beyond standard digital disruptions, the failure crippled physical security at Utrecht University, where students and staff were locked out of buildings and even restrooms because electronic access card systems failed completely. Public transit in Utrecht faced communication breakdowns, while healthcare billing services and numerous pharmacies across the country saw their operations grind to a halt. This incident serves as a stark wake-up call, proving that even ISO-certified facilities with redundant backups are susceptible to catastrophic failure when authorities prioritize safety over continuity. It underscores a critical lesson for organizations: business continuity plans must account for the unpredictable ripple effects of physical infrastructure loss. The event highlights the inherent risks of centralized digital dependencies, revealing that a localized technical fire can effectively paralyze diverse sectors of society far beyond the immediate flames.


The hidden cost of front-end complexity

The article "The Hidden Cost of Front-End Complexity" explores how modern web development has transitioned from solving rendering challenges to facing profound system design issues. While current frameworks have optimized UI performance and component modularity, complexity has not disappeared; instead, it has shifted "up the stack" into application logic and state coordination. Modern front-end engineers now shoulder responsibilities once reserved for multiple infrastructure layers, managing distributed APIs, CI/CD pipelines, and intricate data flows that reside within the browser. The author argues that the true "hidden cost" of this evolution is the significantly increased cognitive load required for developers to navigate a dense web of invisible dependencies and reactive chains. Consequently, development cycles slow down and maintainability suffers when state relationships remain opaque or poorly defined. To address these architectural failures, the industry must pivot from debating framework syntax or rendering speed to prioritizing a "state-first" architecture. In this paradigm, the UI is treated as a simple projection of a clearly modeled state. By shifting the focus toward explicit state representation and observable system design, engineering teams can manage the inherent complexity of large-scale applications more effectively. Ultimately, the future of the front-end lies in building systems that are fundamentally easier to reason about.
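The "UI as a projection of state" idea fits in a few lines. A minimal sketch, with Python standing in for a front-end language; `Store` and `render` are illustrative names, not any framework's API:

```python
class Store:
    """Minimal observable state container: state changes flow one way,
    and every subscriber sees each new state in full."""
    def __init__(self, state: dict):
        self._state = dict(state)
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def dispatch(self, **changes):
        # Explicit, whole-state updates keep dependencies visible.
        self._state = {**self._state, **changes}
        for fn in self._subscribers:
            fn(self._state)

def render(state: dict) -> str:
    """The 'UI': a pure projection of the current state, with no logic of its own."""
    return f"{state['user']}: {state['count']} items"
```

Because `render` holds no state, every frame is reproducible from the store alone, which is the property that makes large applications easier to reason about.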


How Federated Identity and Cross-Cloud Authentication Actually Work at Scale

This article discusses the critical shift from traditional, secrets-based authentication to Federated Identity and Workload Identity Federation (WIF) within modern DevOps and multi-cloud environments. Historically, integrating services across clouds (such as Azure, AWS, or GCP) required storing long-lived service principal keys or static credentials, which posed significant security risks including credential leakage and management overhead. To solve this, Federated Identity utilizes OpenID Connect (OIDC) to establish a trust relationship between an external identity provider and a cloud resource. Instead of using persistent secrets, a workload—such as a GitHub Action or an Azure DevOps pipeline—requests a short-lived, ephemeral token from its identity provider. This token is then exchanged for a temporary access token from the target cloud service, which automatically expires after the task is completed. This approach eliminates the need for manual secret rotation and significantly reduces the attack surface by ensuring no permanent credentials exist to be stolen. By leveraging Managed Identities and structured OIDC exchanges, organizations can achieve a "zero-trust" authentication model that scales across diverse cloud providers, providing a more secure, automated, and maintainable framework for cross-cloud resource management and CI/CD workflows.
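The token exchange can be sketched end to end. A simplified Python model, assuming a toy trust table and unsigned dict "tokens"; real workload identity federation uses signed JWTs that the cloud validates against the IdP's published keys:

```python
import secrets
import time

# Federation trust: which external issuer/subject pairs this cloud account accepts.
TRUSTED = {("https://token.actions.githubusercontent.com",
            "repo:acme/app:ref:refs/heads/main")}

def issue_id_token(issuer: str, subject: str, ttl: int = 300) -> dict:
    """The workload's IdP mints a short-lived identity token
    (simplified here to an unsigned dict)."""
    return {"iss": issuer, "sub": subject, "exp": time.time() + ttl}

def exchange_for_access_token(id_token: dict, ttl: int = 900) -> dict:
    """The cloud's token service validates the trust relationship and returns
    a temporary access token; no long-lived secret is ever stored."""
    if time.time() >= id_token["exp"]:
        raise PermissionError("identity token expired")
    if (id_token["iss"], id_token["sub"]) not in TRUSTED:
        raise PermissionError("no federation trust for this issuer/subject")
    return {"access_token": secrets.token_hex(16), "exp": time.time() + ttl}
```

The security property is visible in the flow: both tokens expire on their own, so there is nothing permanent to rotate or leak.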


Ten years later, has the GDPR fulfilled its purpose?

A decade after its adoption, the General Data Protection Regulation (GDPR) presents a bittersweet legacy, having fundamentally reshaped global corporate culture while facing significant modern hurdles. The regulation successfully elevated privacy from a legal footnote to a core management priority, institutionalizing principles like "privacy by design" and establishing a gold standard for international digital governance. However, experts highlight a growing disconnect between regulatory intent and practical application. While the GDPR empowered citizens with theoretical rights, the reality often manifests as "consent fatigue" through ubiquitous cookie pop-ups rather than providing meaningful control. Furthermore, the enforcement landscape reveals a stark gap; despite billions in issued fines, the actual collection rate remains remarkably low due to protracted legal appeals and the complexity of the "one-stop-shop" mechanism. International data transfers also remain a legal Achilles' heel, plagued by ongoing uncertainty across borders. The emergence of generative AI further complicates this framework, as massive training datasets and opaque algorithms challenge core tenets like data minimization and transparency. Additionally, the proliferation of overlapping EU regulations has created a "regulatory avalanche," making compliance increasingly difficult for smaller organizations. Ultimately, the article suggests that while the GDPR fulfilled its primary purpose, it now requires urgent refinement to remain relevant in a complex, AI-driven digital economy.


Bunkers, Mines, and Caverns: The World of Underground Data Centers

The article "Bunkers, Mines, and Caverns: The World of Underground Data Centers" by Nathan Eddy explores the growing strategic niche of subterranean infrastructure through the adaptive reuse of retired mines and Cold War-era bunkers. Predominantly found in North America and Northern Europe, these facilities offer a unique "underground advantage" centered on unparalleled physical security, environmental resilience, and inherent cooling efficiency. By repurposing sites like Iron Mountain’s Pennsylvania campus or Norway’s Lefdal Mine, operators benefit from a natural, impenetrable shield against extreme weather and external threats, making them ideal for high-security or mission-critical workloads. Furthermore, underground locations often bypass local "NIMBY" resistance because they are invisible to surrounding communities. However, the article notes that subterranean deployments present significant engineering and logistical hurdles. Managing humidity, ventilation, and heat dissipation requires complex systems, and retrofitting older structures can be costly. Site selection is also intricate, requiring rigorous assessments of structural stability and risks like water ingress or geological faults. Despite these challenges, underground data centers are no longer a novelty but a proven, permanent fixture in the industry. They are increasingly attractive in land-constrained hubs like Singapore and for highly regulated sectors, providing a sustainable and secure alternative to traditional above-ground facilities.


Why the future of software is no longer written — it is architected, governed and continuously learned

The article argues that software development is undergoing a fundamental structural shift, moving from manual coding to a paradigm defined by architecture, governance, and continuous learning. As generative AI and agentic systems take over the heavy lifting of building code, the role of the developer is evolving into that of an "intelligence orchestrator" who curates intent rather than writing lines of syntax. For CIOs, this transition represents a critical leadership inflection point where software is no longer just a business enabler but the primary engine for scaling enterprise intelligence. The focus is shifting from development speed to the strategic design of decision systems. This new era necessitates the rise of roles like the Chief AI Officer (CAIO) to govern AI as a strategic asset, ensuring security through zero-trust principles and navigating complex regulatory landscapes like the EU AI Act. While productivity gains are significant, organizations must proactively manage risks such as code hallucinations, model bias, and intellectual property concerns. Ultimately, the future of digital economies will be shaped by leaders who prioritize "intelligence orchestration" over traditional application building, fostering adaptive systems that learn and evolve. Success in 2026 requires a focus on three core mandates: architecting intelligence, governing AI assets, and aligning technology ecosystems with overarching corporate strategy.


Maximizing Impact Amid Constraints: The Role of Automation and Orchestration in Federal IT Modernization

Federal IT leaders currently face a challenging landscape where they must fortify complex digital environments against persistent threats while navigating significant fiscal uncertainty and budget constraints. According to a recent report, over sixty percent of these leaders struggle with monitoring tools across diverse hybrid environments, largely due to the persistence of legacy, multi-vendor systems that create integration gaps and increase operational costs. To overcome these hurdles, federal agencies must strategically embrace automation and orchestration as foundational components of a modern zero-trust architecture. By integrating AI-driven technologies for routine tasks like alert analysis and anomaly detection, IT teams can transition from a reactive posture to a proactive defense, effectively reducing monitoring complexity through single-pane-of-glass solutions. This methodical approach allows organizations to maximize the value of their existing investments while freeing up personnel for mission-critical initiatives. The success of such incremental improvements can be clearly measured through enhanced metrics like mean time to detection (MTTD) and mean time to resolution (MTTR). Ultimately, a disciplined, phased implementation of these technologies ensures that federal agencies maintain operational resilience and mission readiness. By focusing on strategic automation, IT leaders can deliver maximum impact for every budget dollar, ensuring that modernization efforts continue to advance despite the ongoing challenges of a resource-constrained environment.
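The metrics named above are straightforward to compute from incident logs. A minimal sketch, assuming timestamps have already been normalized to a common unit such as hours:

```python
from statistics import mean

def mttd_mttr(incidents):
    """Mean time to detection (MTTD) and mean time to resolution (MTTR).
    Each incident is a (occurred, detected, resolved) tuple, e.g. in epoch hours."""
    mttd = mean(detected - occurred for occurred, detected, _ in incidents)
    mttr = mean(resolved - detected for _, detected, resolved in incidents)
    return mttd, mttr
```

Tracking these two numbers before and after each automation rollout gives the incremental, measurable improvement the article calls for.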

Daily Tech Digest - July 31, 2025


Quote for the day:

"Listening to the inner voice & trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis


AppGen: A Software Development Revolution That Won't Happen

There's no denying that AI dramatically changes the way coders work. Generative AI tools can substantially speed up the process of writing code. Agentic AI can help automate aspects of the SDLC, like integrating and deploying code. ... Even when AI generates and manages code, an understanding of concepts like the differences between programming languages or how to mitigate software security risks is likely to spell the difference between the ability to create apps that actually work well and those that are disasters from a performance, security, and maintainability standpoint. ... NoOps — short for "no IT operations" — theoretically heralded a world in which IT automation solutions were becoming so advanced that there would soon no longer be a need for traditional IT operations at all. Incidentally, NoOps, like AppGen, was first promoted by a Forrester analyst. He predicted that, "using cloud infrastructure-as-a-service and platform-as-a-service to get the resources they need when they need them," developers would be able to automate infrastructure provisioning and management so completely that traditional IT operations would disappear. That never happened, of course. Automation technology has certainly streamlined IT operations and infrastructure management in many ways. But it has hardly rendered IT operations teams unnecessary.


Middle managers aren’t OK — and Gen Z isn’t the problem: CPO Vikrant Kaushal

One of the most common pain points? Mismatched expectations. “Gen Z wants transparency—they want to know the 'why' behind decisions,” Kaushal explains. That means decisions around promotions, performance feedback, or even task allocation need to come with context. At the same time, Gen Z thrives on real-time feedback. What might seem like an eager question to them can feel like pushback to a manager conditioned by hierarchies. Add in Gen Z’s openness about mental health and wellbeing, and many managers find themselves ill-equipped for conversations they’ve never been trained to have. ... There is a growing cultural narrative that managers must be mentors, coaches, culture carriers, and counsellors—all while delivering on business targets. Kaushal doesn’t buy it. “We’re burning people out by expecting them to be everything to everyone,” he says. Instead, he proposes a model of shared leadership, where different aspects of people development are distributed across roles. “Your direct manager might help you with your day-to-day work, while a mentor supports your career development. HR might handle cultural integration,” Kaushal explains. ... When asked whether companies should focus on redesigning manager roles or reshaping Gen Z onboarding, Kaushal is clear: “Redesign manager roles.”


New AI model offers faster, greener way for vulnerability detection

Unlike LLMs, which can require billions of parameters and heavy computational power, White-Basilisk is compact, with just 200 million parameters. Yet it outperforms models more than 30 times its size on multiple public benchmarks for vulnerability detection. This challenges the idea that bigger models are always better, at least for specialized security tasks. White-Basilisk’s design focuses on long-range code analysis. Real-world vulnerabilities often span multiple files or functions. Many existing models struggle with this because they are limited by how much context they can process at once. In contrast, White-Basilisk can analyze sequences up to 128,000 tokens long. That is enough to assess entire codebases in a single pass. ... White-Basilisk is also energy-efficient. Because of its small size and streamlined design, it can be trained and run using far less energy than larger models. The research team estimates that training produced just 85.5 kilograms of CO₂. That is roughly the same as driving a gas-powered car a few hundred miles. Some large models emit several tons of CO₂ during training. This efficiency also applies at runtime. White-Basilisk can analyze full-length codebases on a single high-end GPU without needing distributed infrastructure. That could make it more practical for small security teams, researchers, and companies without large cloud budgets.


Building Adaptive Data Centers: Breaking Free from IT Obsolescence

The core advantage of adaptive modular infrastructure lies in its ability to deliver unprecedented speed-to-market. By manufacturing repeatable, standardized modules at dedicated fabrication facilities, construction teams can bypass many of the delays associated with traditional onsite assembly. Modules are produced concurrently with the construction of the base building. Once the base reaches a sufficient stage of completion, these prefabricated modules are quickly integrated to create a fully operational, rack-ready data center environment. This “plug-and-play” model eliminates many of the uncertainties in traditional construction, significantly reducing project timelines and enabling customers to rapidly scale their computing resources. Flexibility is another defining characteristic of adaptive modular infrastructure. The modular design approach is inherently versatile, allowing for design customization or standardization across multiple buildings or campuses. It also offers a scalable and adaptable foundation for any deployment scenario – from scaling existing cloud environments and integrating GPU/AI generation and reasoning systems to implementing geographically diverse and business-adjacent agentic AI – ensuring customers achieve maximum return on their capital investment.


‘Subliminal learning’: Anthropic uncovers how AI fine-tuning secretly teaches bad habits

Distillation is a common technique in AI application development. It involves training a smaller “student” model to mimic the outputs of a larger, more capable “teacher” model. This process is often used to create specialized models that are smaller, cheaper and faster for specific applications. However, the Anthropic study reveals a surprising property of this process. The researchers found that teacher models can transmit behavioral traits to the students, even when the generated data is completely unrelated to those traits. ... Subliminal learning occurred when the student model acquired the teacher’s trait, despite the training data being semantically unrelated to it. The effect was consistent across different traits, including benign animal preferences and dangerous misalignment. It also held true for various data types, including numbers, code and CoT reasoning, which are more realistic data formats for enterprise applications. Remarkably, the trait transmission persisted even with rigorous filtering designed to remove any trace of it from the training data. In one experiment, they prompted a model that “loves owls” to generate a dataset consisting only of number sequences. When a new student model was trained on this numerical data, it also developed a preference for owls. 
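Distillation itself is simple to illustrate, though the subliminal-transfer effect the study describes does not reproduce at toy scale. A miniature sketch: a linear "student" fit by SGD to the outputs of a "teacher" function standing in for a large model:

```python
import random

def teacher(x: float) -> float:
    """Stand-in for a large 'teacher' model the student will imitate."""
    return 2.0 * x + 1.0

def distill(n_samples: int = 200, epochs: int = 500, lr: float = 0.01):
    """Fit a linear 'student' w*x + b to the teacher's outputs:
    knowledge distillation in miniature."""
    random.seed(0)
    data = [(x := random.uniform(-1, 1), teacher(x)) for _ in range(n_samples)]
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y   # student error against the teacher's label
            w -= lr * err * x
            b -= lr * err
    return w, b
```

The student converges toward the teacher's behavior (w near 2, b near 1) purely from generated outputs; the Anthropic finding is that traits can ride along with such outputs even when the data looks unrelated.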


How to Build Your Analytics Stack to Enable Executive Data Storytelling

Data scientists and analysts often focus on building the most advanced models. However, they often overlook the importance of positioning their work to enable executive decisions. As a result, executives frequently find it challenging to gain useful insights from the overwhelming volume of data and metrics. Despite the technical depth of modern analytics, decision paralysis persists, and insights often fall short of translating into tangible actions. At its core, this challenge reflects an insight-to-impact disconnect in today’s business analytics environment. Many teams mistakenly assume that model complexity and output sophistication will inherently lead to business impact. ... Many models are built to optimize a singular objective, such as maximizing revenue or minimizing cost, while overlooking constraints that are difficult to quantify but critical to decision-making. ... Executive confidence in analytics is heavily influenced by the ability to understand, or at least contextualize, model outputs. Where possible, break down models into clear, explainable steps that trace the journey from input data to recommendation. In cases where black-box AI models are used, such as random forests or neural networks, support recommendations with backup hypotheses, sensitivity analyses, or secondary datasets to triangulate your findings and reinforce credibility.
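The sensitivity analyses suggested for backing up black-box recommendations fit in a few lines. A one-at-a-time sketch, where the `model` callable and input names are purely illustrative:

```python
def sensitivity(model, baseline: dict, delta: float = 0.1) -> dict:
    """One-at-a-time sensitivity: perturb each input by +/-delta (relative)
    and record the resulting output swing, largest first."""
    swings = {}
    for k, v in baseline.items():
        hi = model({**baseline, k: v * (1 + delta)})
        lo = model({**baseline, k: v * (1 - delta)})
        swings[k] = hi - lo
    return dict(sorted(swings.items(), key=lambda kv: -abs(kv[1])))
```

A ranked table of which inputs move the recommendation most is often the single most persuasive artifact in an executive readout.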


GDPR’s 7th anniversary: in the AI age, privacy legislation is still relevant

In the years since GDPR’s implementation, the shift from reactive compliance to proactive data governance has been noticeable. Data protection has evolved from a legal formality into a strategic imperative — a topic discussed not just in legal departments but in boardrooms. High-profile fines against tech giants have reinforced the idea that data privacy isn’t optional, and compliance isn’t just a checkbox. That progress should be acknowledged — and even celebrated — but we also need to be honest about where gaps remain. Too often GDPR is still treated as a one-off exercise or a hurdle to clear, rather than a continuous, embedded business process. This short-sighted view not only exposes organisations to compliance risks but causes them to miss the real opportunity: regulation as an enabler. ... As organisations embed AI deeper into their operations, it’s time to ask the tough questions around what kind of data we’re feeding into AI, who has access to AI outputs, and if there’s a breach – what processes we have in place to respond quickly and meet GDPR’s reporting timelines. Despite the urgency, there’s still a glaring gap: many organisations lack a formal AI policy, which exposes them to privacy and compliance risks that could have serious consequences, especially given that data loss prevention is a top priority for businesses.


CISOs, Boards, CIOs: Not dancing Tango. But Boxing.

CISOs overestimate alignment on core responsibilities like budgeting and strategic cybersecurity goals, while boards demand clearer ties to business outcomes. Another area of tension is around compliance and risk. Boards tend to view regulatory compliance as a critical metric for CISO performance, whereas most security leaders view it as low impact compared to security posture and risk mitigation. ... security is increasingly viewed as a driver of digital trust, operational resilience, and shareholder value. Boards are expecting CISOs to play a key role in revenue protection and risk-informed innovation, especially in sectors like financial services, where cyber risk directly impacts customer confidence and market reputation. In India’s fast-growing digital economy, this shift empowers security leaders to influence not just infrastructure decisions, but the strategic direction of how businesses build, scale, and protect their digital assets. Direct CEO engagement is making cybersecurity more central to business strategy, investment, and growth. ... When it comes to these complex cybersecurity subjects, the alignment between CXOs and CISOs is uneven and still maturing. Our findings show that while 53 per cent of CISOs believe AI gives attackers an advantage (down from 70 per cent in 2023), boards are yet to fully grasp the urgency. 


Order Out of Chaos – Using Chaos Theory Encryption to Protect OT and IoT

It turns out, however, that chaos is not ultimately and entirely unpredictable because of a property known as synchronization. Synchronization in chaos is complex, but ultimately it means that despite their inherent unpredictability two outcomes can become coordinated under certain conditions. In effect, chaos outcomes are unpredictable but bounded by the rules of synchronization. Chaos synchronization has conceptual overlaps with Carl Jung’s work, Synchronicity: An Acausal Connecting Principle. Jung applied this principle to ‘coincidences’, suggesting some force transcends chance under certain conditions. In chaos theory, synchronization aligns outcomes under certain conditions. ... There are three important effects: data goes in and random chaotic noise comes out; the feed is direct RTL; there is no separate encryption key required. The unpredictable (and therefore effectively, if not quite scientifically) unbreakable chaotic noise is transmitted over the public network to its destination. All of this is done at the hardware – so, without physical access to the device, there is no opportunity for adversarial interference. Decryption involves a destination receiver running the encrypted message through the same parameters and initial conditions, and using the chaos synchronization property to extract the original message. 
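The synchronization idea can be illustrated in software, though only as a toy: a chaotic logistic map seeded with shared parameters produces the same keystream at both ends, so no key travels with the message. This sketch is for intuition only; the article describes hardware (RTL) implementations, and a software logistic-map XOR is not a secure cipher:

```python
def logistic_stream(x0: float, r: float, n: int) -> list:
    """Iterate the logistic map x -> r*x*(1-x) and quantize each state
    to a byte, yielding a chaotic keystream."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def chaos_xor(data: bytes, x0: float = 0.61803, r: float = 3.99) -> bytes:
    """XOR data with the chaotic stream. Sender and receiver share only the
    parameters (x0, r); running the same map with the same initial conditions
    'synchronizes' the two ends, so no separate key is transmitted."""
    return bytes(b ^ k for b, k in zip(data, logistic_stream(x0, r, len(data))))
```

Because XOR is its own inverse, applying `chaos_xor` twice with the same parameters recovers the original message, which mirrors the decryption step the article describes.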


5 ways to ensure your team gets the credit it deserves, according to business leaders

Chris Kronenthal, president and CTO at FreedomPay, said giving credit to the right people means business leaders must create an environment where they can judge employee contributions qualitatively and quantitatively. "We'll have high performers and people who aren't doing so well," he said. "It's important to force your managers to review everyone objectively. And if they can't, you're doing the entire team a disservice because people won't understand what constitutes success." ... "Anyone shying away from measurement is not set up for success," he said. "A good performer should want to be measured because they're comfortable with how hard they're working." He said quantitative measures can be used to prompt qualitative debates about whether, for example, underperformers need more training. ... Stephen Mason, advanced digital technologies manager for global industrial operations at Jaguar Land Rover, said he relies on his talented IT professionals to support the business strategy he puts in place. "I understand the vision that the technology can help deliver," he said. "So there isn't any focus on 'I' or 'me.' Every session is focused on getting the team together and giving the right people the platform to talk effectively." Mason told ZDNET that successful managers lean on experts and allow them to excel.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model.


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 
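Turning raw billing records into per-team or per-application views is the first concrete step toward that visibility. A minimal sketch, assuming records are already tagged (the field names are illustrative):

```python
from collections import defaultdict

def spend_by(records: list, key: str) -> dict:
    """Roll up tagged billing records into spend per dimension
    (team, application, service...), largest spenders first."""
    totals = defaultdict(float)
    for rec in records:
        # Untagged spend is a finding in itself: it cannot be attributed.
        totals[rec.get(key, "untagged")] += rec["cost"]
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))
```

The size of the "untagged" bucket is often the first cost-governance metric worth tracking, since nothing in it can be attributed to a team or workload.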


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to leverage data lineage and track their data as it moves through systems to ensure transparency and identify issues in real time. It also ensures they can regularly validate data integrity to support consistent, reliable AI models by conducting real-time quality checks. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and ensuring continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage.
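A minimal sketch of the kind of real-time quality check described above: validating each record against a simple schema of type and required-field rules before it reaches a model. The field names and schema shape are invented for illustration; production observability platforms layer lineage and anomaly detection on top of checks like this.

```python
def validate_record(record, schema):
    """Return a list of integrity violations for one record.

    schema maps each field name to a (type, required) pair; an
    empty result means the record passes the baseline checks.
    """
    issues = []
    for field, (ftype, required) in schema.items():
        value = record.get(field)
        if value is None:
            if required:
                issues.append(f"missing required field: {field}")
        elif not isinstance(value, ftype):
            issues.append(
                f"{field}: expected {ftype.__name__}, got {type(value).__name__}"
            )
    return issues

schema = {"account_id": (str, True), "balance": (float, True), "note": (str, False)}
# Flags the string-typed balance; 'note' is optional, so its absence is fine.
print(validate_record({"account_id": "a-1", "balance": "NaN"}, schema))
```

Running checks like this at ingestion, rather than at training time, is what lets issues surface while they are still cheap to fix.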


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.
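As a concrete illustration of that shift, a minimal mTLS setup in Python's standard `ssl` module looks like the sketch below. The certificate and CA file paths are placeholders you would issue from your own private CA; the key line is `verify_mode = ssl.CERT_REQUIRED`, which is what makes the TLS mutual.

```python
import ssl

def server_context(cert_file, key_file, ca_file):
    """Server-side mTLS context: presents its own certificate and
    *requires* a valid client certificate signed by the given CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.load_verify_locations(cafile=ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # the "mutual" half of mTLS
    return ctx

def client_context(cert_file, key_file, ca_file):
    """Client-side mTLS context: verifies the server against the CA
    and presents its own certificate when the server asks for it."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_file)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx
```

With the server context in place, any client that cannot present a certificate chaining to the CA fails the handshake before a single application byte is exchanged — access is decided by cryptographic identity, not by which IP the packet came from.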


Artificial Intelligence Versus the Data Engineer

It’s worth noting that there is a misconception that AI can prepare data for AI, when the reality is that, while AI can accelerate the process, data engineers are still needed to get that data in shape before it reaches the AI processes and models and we see the cool end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.
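The anomaly sniffing described above often amounts to a handful of well-aimed SQL queries. Here is a toy illustration against an in-memory SQLite table with deliberately messy rows — the table, columns, and check names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, placed_at TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',  120.0, '2024-01-05'),
        (1, 'acme',  120.0, '2024-01-05'),  -- duplicate id
        (2, NULL,     35.5, '2024-01-06'),  -- missing customer
        (3, 'zenit', -40.0, '2024-01-07');  -- negative amount
""")

# Each check counts rows that violate one expectation about the data.
checks = {
    "duplicate ids": "SELECT COUNT(*) FROM (SELECT id FROM orders GROUP BY id HAVING COUNT(*) > 1)",
    "null customers": "SELECT COUNT(*) FROM orders WHERE customer IS NULL",
    "negative amounts": "SELECT COUNT(*) FROM orders WHERE amount < 0",
}
for name, sql in checks.items():
    (count,) = conn.execute(sql).fetchone()
    print(f"{name}: {count}")
```

The queries are trivial; the craft is in knowing which checks to write for the dataset in front of you — which is exactly the intuition the article argues AI has not replaced.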


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They’re not just adopting AI—they’re rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why.) This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive. 


Daily Tech Digest - March 25, 2024

Two ways to improve GDPR enforcement

Centralised enforcement would certainly add efficiency and consistency to the enforcement process. However, implementation could take years, and even once it’s in place, there’s a risk that member states may disagree about enforcement decisions because one member state could take issue with rulings made by the central enforcement agency. The other foreseeable approach is for the EU to stick with its current decentralised approach to GDPR enforcement, but to invest in measures that would make enforcement more consistent and efficient. ... Developing clearer guidelines about GDPR interpretation would help, too. As a principles-based framework, the GDPR can be overwhelming to interpret, making it challenging for businesses to comply and for enforcement authorities in various countries to determine when a violation has taken place. Centralised interpretation guidance in the form of clarifications about complex GDPR requirements or examples of successful compliance would help ensure more consistent and efficient enforcement of the GDPR, even without a centralised enforcement agency.


How to get your CFO to buy into a better model for IT funding

To ensure persistent teams stay within budget, and thereby reduce risk, it’s crucial that executives understand the fundamental agile principles related to flexible scope and fixed budget. Sometimes, management needs to make a change in direction, and persistent teams allow for this. By using data insights from the quarterly business performance report, the CFO is made aware of situations where the organisation is not tracking towards goals. The executive is then empowered to reprioritise, while still focusing on the ‘why’ or outcome to be delivered. They can change persistent teams’ focus by working with them to swap one initiative for another — rather than asking for additional funding. Making trade-offs means they need to prioritise wisely, as there is a fixed budget to work within. “When there is a change in direction, executives are empowered to make trade-offs to deliver on their needs. It is no longer an ‘ask’ of technology,” says Hubbard, regarding Rest’s use of an agile approach in conjunction with persistent funding. We set up a persistent pilot team at Rest in 2023 to test out the concept. About three months into the six-month pilot, the team uncovered that one of the initiatives wasn’t technically feasible at this time.


7 Tips for Managing Cross-Border Data Transfers

Partners are great for business, but they can misunderstand and make mistakes, too. Their errors can cost your organization as much as its own mistakes can. Take steps to ensure all third parties you work with comply as well. “Increasingly, companies that want to mature and manage their cross-border data transfers are putting in place three-part vendor risk programs that include pre-contract assessments, contractual safeguards model privacy and data protection provisions and data processing addendums (DPAs), and post-contract audits,” says Jim Koenig, a partner at Troutman Pepper and co-chair of its privacy and cyber practice group. The first ensures third parties meet your security requirements and provides an inventory of data transfers. The second -- contractual safeguards model privacy and data protection provisions and DPAs -- “define the specific uses and restrictions on secondary uses, including AI algorithm training, and compliance requirements,” Koenig says. And the last, post-contract audits, “assesses the recipient company’s compliance with the applicable data transfer laws, such as EU GDPR, Saudi Arabia, China’s PIPL and others, and specific contract requirements,” he says.


Getting Ahead of Shadow Generative AI

Generative AI should help you differentiate what your company does. However, using public LLMs alone will not deliver this, and you will sound the same as everyone else. Companies can make their generative AI strategies more effective and tailored for them and for employees by bringing their own data to the table using retrieval augmented generation, or RAG. RAG takes your own data, gets it ready for use with generative AI, and then passes this data as context into the LLM when your employee asks for a response. RAG is part of solving problems like hallucinations, and it also makes results more relevant for your organization and your customers, rather than producing results similar to those of other companies asking the same kinds of questions. ... To implement this, you will have to combine various tools, from vector data stores to AI integrations, to build a RAG stack that makes it easier and faster to get started. Delivering this quickly will help you prevent some of those “off the books” deployments that teams might try to do for themselves while they wait for central IT.
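The retrieval step at the heart of RAG can be sketched in a few lines. Below, a toy bag-of-words scorer stands in for a real embedding model and vector store, and the prompt assembly shows how retrieved snippets are passed as context to the LLM. The document snippets and prompt wording are invented for illustration:

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    A production RAG stack would use embeddings and a vector store."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, documents):
    """Assemble the augmented prompt: retrieved snippets become the
    context the LLM must answer from, grounding its response."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
    "The warranty covers manufacturing defects for one year.",
]
print(build_prompt("what is the refund policy", docs))
```

Swapping the toy scorer for a vector similarity search is the main production change; the shape of the pipeline — retrieve, assemble, generate — stays the same.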


The state of ransomware: Faster, smarter, and meaner

The pace of innovation on the part of ransomware criminal groups has hit a new high. “In the past two years, we have witnessed a hockey stick curve in the rate of evolution in the complexity, speed, sophistication, and aggressiveness of these crimes,” says John Anthony Smith, CSO and founder of cybersecurity firm Conversant Group. ... “They have combined innovative tactics with complex methods to compromise the enterprise, take it to its knees, and leave it little room to negotiate,” Smith says. One sign of this is that dwell time — the length of time from first entry to data exfiltration, encryption, backup destruction, or ransom demand — has dramatically shortened. “While it used to take weeks, threat actors are now often completing attacks in as little as four to 48 hours,” says Smith. Another new tactic is that attackers are evading multifactor authentication by using SIM swapping attacks and token capture or taking advantage of MFA fatigue on the part of employees. Once a user authenticates themselves, tokens are used to authenticate further requests so that they don’t have to keep going through the authentication.


Companies are about to waste billions on AI — here’s how not to become one of them

As you think about saying yes to that next AI project, look at the cost of the needed resources, today and over time, to sustain that project. Ten hours of work from your data science team often has 5X the engineering, DevOps, QA, product and SysOps time buried underneath. Companies are littered with fragments of projects that were once a good idea but lacked ongoing investment to sustain them. Saying no to an AI initiative is hard today, but too many yeses often come at the cost of fully funding the few things worth supporting tomorrow. Another dimension to cost is the increasing marginal cost that AI drives. These large models are costly to train, run and maintain. ... The simplest bets are the ones that better the business you are already in. The old BASF commercial comes to mind: “We don’t make the things you buy, we make the things you buy better.” If the application of AI provides you momentum in the products you already make, that bet is the easiest to make and scale. The second easiest bets are the ones that let you move up and down the value chain or laterally expand to other sectors.


Securing Modern Banking Applications – Do’s and Don’ts

The consumer also plays a pivotal role in the security of their mobile banking. As the device user, consumers and/or employees need to beware of banking applications that ask for tons of accessibility permissions. Granting accessibility permissions without closely looking at what they are requesting can be risky because these permissions can give apps broad control over a device’s functionalities. Banking trojans will often ask for and then exploit accessibility features to automate transactions, capture sensitive data (such as passwords) or overlay fake login screens on legitimate banking apps. Just because the app is legit, consumers should still proceed with caution, knowing that trojans will often use this “preconceived trust” as a launching pad for their destructive attacks. Consumers should also avoid downloading banking apps from unvetted sources, such as third-party app stores that lack the rigorous security controls that actual Apple or Android stores have. Lastly, beware of phishing emails, URLs or texts that look legitimate. Threat actors will often reverse-engineer banking apps to steal logos and other icons to imitate the actual app.


8 cybersecurity predictions shaping the future of cyber defense

By 2028, the adoption of GenAI will collapse the skills gap, removing the need for specialized education from 50% of entry-level cybersecurity positions. GenAI augments will change how organizations hire and train cybersecurity workers, screening for the right aptitude as much as the right education. Mainstream platforms already offer conversational augments, but these will evolve. Gartner recommends cybersecurity teams focus on internal use cases that support users as they work; coordinate with HR partners; and identify adjacent talent for more critical cybersecurity roles. ... By 2026, enterprises combining GenAI with an integrated platforms-based architecture in security behavior and culture programs (SBCP) will experience 40% fewer employee-driven cybersecurity incidents. Organizations are increasingly focused on personalized engagement as an essential component of an effective SBCP. GenAI has the potential to generate hyperpersonalized content and training materials that take into context an employee’s unique attributes. According to Gartner, this will increase the likelihood of employees adopting more secure behaviors in their day-to-day work, resulting in fewer cybersecurity incidents.


Data Security Posture Management in the Education Sector: What You Need to Know

The first and perhaps most crucial step is identifying where all instances of student data reside within your institution. With a best-of-breed DSPM solution, advanced machine learning (ML) and AI can autonomously scan and categorize student data, regardless of where it’s stored (including in structured and unstructured data repositories, email/messaging applications, or cloud or on-premises storage), including its semantic context. It can identify the data, learn its usage patterns, and determine if it’s at risk. This thorough discovery and identification process is also especially important for educational institutions aiming for FERPA compliance. ... The ability to identify and classify sensitive student data puts institutions in a great place, but once identified, any vulnerabilities and risks found must be remediated. Leveraging deep learning, DSPM solutions can compare each data element with baseline security practices used by similar data to detect risk -- even without relying on rules and policies. Even better is to address these access risks in real time -- whether that means remediating access control issues, disabling sensitive file sharing, or blocking an attachment in a messaging platform.


API Security Best Practices That CTOs Can Action Today

The basic function of APIs is to facilitate the exchange of data from one system to another, a process that inherently multiplies potential security risks. The current pace of innovation, with new services, features, and operations being rolled out almost daily, means that several foundational security practices are often overlooked. This oversight can dramatically decrease an organization’s security posture because APIs, by their very design, open up access to data and systems – often beyond the direct control of the organization. This aspect of APIs – the “link” to external entities – is a double-edged sword. While it enables unprecedented levels of interconnectivity and functionality between applications, it also demands that security controls be as robust and comprehensive as those applied to internal access management. However, therein lies the problem: while developers and IT professionals are adept at quickly setting up APIs in the interests of enhancing their services and operations, they often don’t apply the same security standards as they would to strictly internal operations. 



Quote for the day:

"The more I help others to succeed, the more I succeed." -- Ray Kroc

Daily Tech Digest - January 13, 2024

Frenemies to friends: Developers and security tools

Cultural shifts happen when security is built into the developer’s existing flow, as opposed to being injected as its own new stage in the pipeline. Look for points in their process where they are already in “pause” or “edit” mode, like at the Pull Request, where you can surface vulnerabilities and ask for remediation efforts. Doing so can avoid context switching and feelings of being interrupted. Capitalizing on an existing developer pause point can help train your developers to look at security vulnerabilities like functionality bugs, a skill they already have, while also shortening feedback loops. ... Developer-to-developer enablement is key. There is often a feeling of mistrust between engineering and security, but developers share the same interests and have the same priorities. Let individual contributors have an opportunity to educate and enable other individual contributors. If you have had a successful pilot or PoC team, or notice self-motivated folks using the tool proactively, give them space to share their experience with the tool. 


The Joys and Pains of DevOps

DevOps is very much a culture change in the way development, operations and even security work together. Even though DevOps aims to improve this, in many cases, these areas still function in silos. There are times when one area implements something that blocks another; and as a DevOps leader, you’re often in the middle trying to figure out the best path forward while also finding an acceptable middle ground. ... A well-engineered DevOps solution should render the team invisible. That includes both the happy path, when deployments succeed, as well as how well you enable teams to solve their deployment issues. There is also one common element of what makes DevOps rewarding: improving developer experience and business outcomes. Dale Francis, director of product development at Climavision, says the rewards of DevOps come from solving problems, so day-to-day operations become simple and the experience for developers better. In addition, maturing as a DevOps organization also lets everyone focus more on solving business problems, rather than fighting technical issues. 


Why Engineering Is Key To A Flourishing Workplace Culture

If your engineering strategy demands precision but your workplace culture tolerates ambiguity and shortcuts, you won't get anywhere. If your engineering strategy demands accountability but your workplace culture doesn't draw connections between an individual's efforts and the higher goals of the operation, you won't get anywhere. If your engineering strategy demands innovation but your workplace culture rewards risk aversion, you won't get anywhere. ... In an arena as complex and technical as engineering, it's easy to lose sight of the human side. Whether your workplace is in-person, remote or hybrid, it's crucial to create spaces (literal or virtual) where employees feel connected and empowered to ask questions. Trust and creativity flourish in an environment where autonomy and authentic connections coexist. ... Inertia is fatal to engineering. Regularly evaluate and adopt new technologies. Find out what your customers need. Find out what hurdles they're up against. Think three steps ahead so your tech stack supports the evolving needs of your business and the market.


Life's Too Short to Work With Incompatible People

Celebrate failure and learn to give feedback. When you embrace failure, you learn and course-correct more quickly. Failure is a sign you're doing something right. You're testing, learning, flexing your creative muscles and moving on efficiently after hitting a brick wall. You must build a team open to feedback to make the most of your failures for the company's good. Feedback is the mode by which we make positive changes out of failure. The challenge? Feedback makes most people cringe. We associate it with criticism as opposed to growth. ... Clear communication may seem like an obvious necessity on high-performing teams, but it's something that's often taken for granted. Unclear communication can quickly tank a team's efforts. A team that has mastered precise communication, on the other hand, can achieve incredible outcomes quickly. We follow an "open book" mentality at Wistia. On all-hands calls, we share candid information about the state of the company – inclusive of the good and the bad – so everyone has the big picture. 


Researchers demo new CI/CD attack techniques in PyTorch supply-chain

Khan initially found a critical vulnerability that could have led to the poisoning of GitHub Actions’ official runner images. The “runners” are the VMs that execute build actions defined inside GitHub Actions workflows. After reporting the vulnerability to GitHub and receiving a $20,000 bug bounty for it, Khan realized that the core issue he found was systemic and that thousands of other repositories were likely impacted. Since then, Khan and Stawinski found vulnerabilities in the software repositories and development infrastructure of major corporations and software projects and collected hundreds of thousands of dollars in rewards through bug bounty programs. Their “victims” included Microsoft Deepspeed, a Cloudflare application, the TensorFlow machine-learning library, the crypto wallets and nodes of several blockchains, and PyTorch, one of the most widely used open-source machine-learning frameworks. PyTorch was originally developed by Meta AI, a subsidiary of Meta, but its development is now governed by the PyTorch Foundation, an independent organization that operates under the Linux Foundation’s umbrella.


For a Secure Foundation, Health Systems Must Address Technical Debt

We need to update network equipment and workstations. We may still even have Windows 2003 and 2008. And hardware is not as expensive as the applications that are on there. So that level of technical debt and competing for those dollars where in healthcare you need to have nice offices and that type of thing. So we’re competing with those, with other projects or capital where other organizations may think of that as just an ongoing IT update expense. ... I might hear this stuff at home occasionally, but it’s the same with IT projects. “Hey, we had an acquisition. We got them up and running. We didn’t take care of their technical debt so we’re assuming that.” We’re going through some of those servers now, it’s like, can we even find anybody that knows anything about it, or is it just everyone’s afraid to turn it off? What I like to say is if you didn’t sit around the right campfire, you don’t know the story. So for me, my job sometimes is just to keep asking those questions: “Who knows something about this server?” Sometimes it comes down to the scream test, but I’ve developed a quality, I call it positive persistence. I just keep asking questions politely until we make progress.


The way forward is to make technology 'human-like': Report

As the world undergoes a massive technological transformation, artificial intelligence (AI) and other disruptive technologies will increasingly adopt a more human-like or "Human by Design" approach, according to a new study published on Wednesday. As these technologies become more human-like and intuitive for people to use, they will increasingly lead to a new era of unprecedented productivity and creativity, said the report, titled 'Accenture Technology Vision 2024: Human by design, how AI unleashes the next level of human potential,' which also emphasizes that enterprises that prepare for this shift now will be the winners in the future. The research further highlights that as human-centric technologies continue to advance, they are becoming easier to interact with and more seamlessly integrated into every aspect of our lives. ... As AI, spatial computing, and body-sensing technologies evolve to imitate human capabilities and become less noticeable, the true focus will be on the people who are empowered with new capabilities to achieve what was once considered impossible.


Expert Insight: Andrew Snow on a landmark GDPR ruling

For organisations, it makes clear beyond all doubt that ignorance isn’t an excuse. In fact, if organisations – or managers within them – plead ignorance to the infringement now, they may face a higher fine than if they had taken responsibility for their actions. For regulators, an important precedent has been set. This ruling has provided them with clear direction on where the line falls when deciding on issuing administrative penalties, including fines. For instance, the EDPB [European Data Protection Board] recently reported on another case, involving the Slovak and Hungarian authorities, where there was a dispute over the ownership. The Hungarian regulator ultimately determined that both parties jointly determined the purposes of processing, so were joint controllers – and as such, breached the GDPR because their agreement failed to document this and, by extension, their respective responsibilities. Given the timing of this decision, it probably wasn’t influenced by the ECJ ruling, but I expect that future cases like this would use the ruling as a precedent.


What Are Digital Twins and How Can They Be Used in Healthcare?

Trayanova’s research applies personalized digital twin approaches to clinical decision-making. She aims to improve predictive diagnostics and to predict optimal treatment plans for patients; the approach is currently being used to treat patients with heart rhythm disorders. At Johns Hopkins, Trayanova and her team can create a personalized digital twin representing the geometry of a patient’s heart. The digital twin includes the heart’s structure; disease remodeling such as damage, fibrosis and inflammation identified through MRI or PET scans; and its electrical wave propagation. When an electrical wave propagates through the heart, it triggers a contraction. However, if a patient has scarring or other damage, the wave will catch in that area and, rather than propagating through the heart, it will recirculate and cause an arrhythmia. To treat the arrhythmia, the digital twin must accurately represent the damage as well as the electrical activity of each cell in the heart. “Now you have something that dynamically links the heart’s components,” Trayanova says. Using the digital twin, she and her team can send a signal and watch how the electrical wave propagates through the model.
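
Trayanova’s actual twins are built from patient imaging and detailed cell electrophysiology, but the qualitative behaviour she describes – a wavefront that sweeps through healthy tissue and is forced to detour around damage – can be illustrated with a toy excitable-medium automaton. Everything below (grid size, scar layout, three-state cell model) is an illustrative assumption, not her model:

```python
import numpy as np

# Toy excitable-medium automaton: each cell is resting, excited, or
# refractory; scar cells never conduct. Illustration only, not a
# physiological model.
REST, EXCITED, REFRACTORY = 0, 1, 2

def neighbours_excited(state):
    """True where at least one 4-neighbour is excited (no wraparound)."""
    e = (state == EXCITED)
    out = np.zeros_like(e)
    out[1:, :] |= e[:-1, :]   # neighbour above
    out[:-1, :] |= e[1:, :]   # neighbour below
    out[:, 1:] |= e[:, :-1]   # neighbour to the left
    out[:, :-1] |= e[:, 1:]   # neighbour to the right
    return out

def step(state, scar):
    """One tick: excited -> refractory -> resting; a resting, non-scar
    cell fires when the wavefront reaches a neighbouring cell."""
    fire = (state == REST) & neighbours_excited(state) & ~scar
    new = np.full_like(state, REST)
    new[state == EXCITED] = REFRACTORY
    new[fire] = EXCITED
    return new

# A 20x20 tissue patch with a vertical scar wall that has a two-cell
# gap; stimulate the left edge and watch the wave squeeze through.
n = 20
state = np.full((n, n), REST)
scar = np.zeros((n, n), dtype=bool)
scar[:, 10] = True          # scar wall at column 10...
scar[9:11, 10] = False      # ...with a small conducting gap
state[:, 0] = EXCITED       # planar stimulus at the left edge

ever_fired = (state == EXCITED)
for _ in range(60):
    state = step(state, scar)
    ever_fired |= (state == EXCITED)

# Every conducting cell is eventually activated via the gap;
# scar cells never fire.
print(int(ever_fired.sum()), int(ever_fired[scar].sum()))
```

Running the model shows the wavefront crossing the healthy tissue in a straight line, then funnelling through the gap and fanning out on the far side – a crude analogue of the detour-and-recirculate behaviour that, in a real heart, underlies re-entrant arrhythmias.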


What will the metaverse mean for business models?

In media and entertainment, the primary business model has evolved from ownership to subscription. In the past, most people bought CDs and DVDs to build a collection – and today, owning vinyl is booming in popularity again. But for the majority of people, the accepted model is accessing songs, films and TV series online and building a virtual library. The difference is that if you stop paying the subscription, you have nothing. Will it be the same in the metaverse? We’ll have to wait and see. But it’s safe to assume that people will want ownership of their assets without paying a subscription (except for the wallet that protects them). To complicate things further, there is the question of what role generative AI content will play in metaverse business models. Today, it’s generally accepted that no one owns work created by generative AI. But won’t this change? In fact, this assumption may even be wrong – in the UK, for example, the law implies that the creators of the AI platform own anything wholly created by it.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - November 30, 2023

Super apps: the next big thing for enterprise IT?

Enterprise super apps will allow employers to bundle the apps employees use under one umbrella. This creates efficiency and convenience: different departments can select only the apps they want, much like in a marketplace, to customize their working experiences. Other advantages of super apps for enterprises include providing a more consistent user experience, combating app fatigue and app sprawl, and enhancing security by consolidating functions into one company-managed app. Gartner analyst Jason Wong said the analyst firm is seeing interest in super apps from organizations, including big box stores and other retailers, that have a lot of frontline workers who rely on their mobile devices to do their jobs. One company that has adopted a super app to enhance the experience of its frontline workers and other employees is TeamHealth, a leading physician practice in the US. TeamHealth is using an employee super app from MangoApps, which unifies all the tools and resources employees use daily within one central app.


Meta faces GDPR complaint over processing personal data without 'free consent'

The case centres on whether Meta can legitimately claim to have obtained free consent from its customers to process their data, as required under GDPR, when the only alternative is for customers to pay a substantial fee to opt out of ad-tracking. The complaint will be watched closely by social media companies such as TikTok, which are reported to be considering offering ad-free services to customers outside the US to meet the requirements of European data protection law. Meta denied that it was in breach of European data protection law, citing a European Court of Justice ruling in July 2023 which it said expressly recognised that a subscription model was a valid form of consent for an ad-funded service. Spokesman Matt Pollard referred to a blog post announcing Meta’s subscription model, which stated, “The option for people to purchase a subscription for no ads balances the requirements of European regulators while giving users choice and allowing Meta to continue serving all people in the EU, EEA and Switzerland”.


India’s Path to Cyber Resilience Through DevSecOps

DevSecOps, a collaborative methodology spanning development, security, and operations, places a strong emphasis on integrating security practices into the software development and deployment processes. In India, the approach has gained substantial traction for several reasons, including a security-first mindset, adherence to compliance requirements and escalating cybersecurity threats. A survey revealed that the primary business driver for DevSecOps adoption is a keen focus on business agility, achieved through the rapid and frequent delivery of application capabilities, as reported by 59 per cent of respondents. From a technological perspective, the most significant factor is improved management of cybersecurity threats and challenges, highlighted by 57 per cent of participants. Businesses now understand the importance of proactive security measures. DevSecOps encourages a security-first mentality, ensuring that security is an integral part of the development process from the outset.


Cybersecurity and Burnout: The Cybersecurity Professional's Silent Enemy

In the world of cybersecurity, where digital threats are a constant, the mental health of professionals is an invaluable asset. Mindfulness not only emerges as a shield against the stress and burnout that pose security risks to organizations, but it also becomes a key strategy to reduce the costs associated with lost productivity and staff turnover. By adopting mindfulness practices and preventing burnout, cybersecurity professionals not only preserve their well-being, but also contribute to a healthier work environment, improve the responsiveness and effectiveness of cybersecurity teams, and ensure the continued success of companies in this critical technology field. Cybersecurity challenges are multidimensional. They cannot be managed in only one dimension. Mindfulness is an essential tool to keep us one step ahead. By recognizing the value of emotional well-being in the fight against cyberattacks, we can build a stronger and more sustainable defense. Cybersecurity is not only a technical issue, but also a human one, and mindfulness presents itself as a key piece in this intricate security puzzle.


Will AI replace Software Engineers?

While AI is automating some tasks previously done by developers, it is not likely to lead to widespread job losses. In fact, AI is creating new job opportunities for software engineers with the skills and expertise to work with it. According to a 2022 report by the McKinsey Global Institute, AI is expected to create 9 million new jobs in the United States by 2030. The jobs most likely to be lost to AI are those that are routine and repetitive, such as data entry and coding. However, software engineers with the skills to work with AI will be in high demand. ... Embrace AI as a tool to enhance your skills and productivity as a software engineer. While there is concern about AI replacing software engineers, it is unlikely to replace high-value developers who work on complex and innovative software. To avoid being replaced by AI, focus on building sophisticated and creative solutions. Stay up to date with the latest AI and software engineering developments, as the field is constantly evolving. Adapt to the changing landscape by acquiring new skills and techniques. Remember that AI and software engineering can collaborate effectively, as AI complements human skills.


Bridging the risk exposure gap with strategies for internal auditors

Without a strategic view of the future — including a clear-eyed assessment of strengths, weaknesses, opportunities, threats, priorities, and areas of leakage — internal audit is unlikely to recognize actions needed to enable success. There is no bigger threat to organizational success than a misalignment between exponentially increasing risks and a failure to respond due to a lack of vision, resources, or initiative. Create and maintain a good, well-documented strategic plan for your internal audit function. This can help you organize your thinking, force discipline in definitions, facilitate implementation, and continue asking the right questions. Nobody knows for certain what lies ahead, and a well-developed strategic plan is a key tool for preparing for chaos and ambiguity. ... Companies may have less time than they think to prepare for compliance, and internal auditors should be supporting their organizations in getting the right enabling processes and technologies in place as soon as possible. This will require a continuing focus on breaking down silos and improving how internal audit collaborates with its risk and compliance colleagues. 


Generative AI in the Age of Zero-Trust

Enter generative AI. Generative AI models produce content, predictions, and solutions based on vast amounts of available data. They’re making waves not just for their ‘wow’ factor, but for their practical applications, and it’s only natural that employees would gravitate to the latest technology that makes them more efficient. For cybersecurity, this means potential tools that offer predictive threat analysis based on patterns, provide automatic code fixes, dynamically adjust policies in response to evolving threat landscapes and even respond automatically to active attacks. Used correctly, generative AI can shoulder some of the complexity that has built up over the course of the zero-trust era. But how can you trust generative AI if you are not in control of the data that trains it? You can’t, really. ... This is forcing organizations to start setting generative AI policies. Those that choose the zero-trust path and ban its use will only repeat the mistakes of the past: employees will find ways around bans if it means getting their job done more efficiently. Those who harness it will make a calculated tradeoff between control and productivity that will keep them competitive in their respective markets.


Organizations Must Embrace Dynamic Honeypots to Outpace Attackers

There are a number of ways in which AI-powered honeypots are superior to their static counterparts. The first is that because they can evolve independently, they become far more convincing over time, sidestepping the need for constant manual adjustments to keep the honeypot a realistic facsimile. Secondly, as the AI learns and develops, it will become far more adept at planting traps for unwary attackers. Hackers will not only have to move more slowly than usual to avoid those traps, but once one is triggered, it will likely provide far richer data to defense teams about what attackers are clicking on, the information they are after, and how they are moving across the site. Finally, using AI tools to design honeypots means that, under the right circumstances, even tangible assets can be turned into honeypots. ... Therefore, using tangible assets as honeypots allows defense teams to target their energy more efficiently and enables the AI to learn faster, as there will likely be more attackers coming after a real asset than a fake one.
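
The adaptive, AI-driven behaviour described above sits on top of a much simpler mechanism: an instrumented decoy service that accepts connections, presents believable bait, and records everything the intruder does. A minimal low-interaction sketch of that underlying mechanism (the fake SSH banner, port handling and log format are illustrative assumptions, not details from the article):

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", banner=b"SSH-2.0-OpenSSH_8.9\r\n",
                 max_conns=1):
    """Minimal low-interaction honeypot: present a fake service banner,
    record what each client sends, and grant nothing real. Returns the
    bound port, a shared log list, and the server thread."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))            # port 0: let the OS pick a free port
    srv.listen()
    port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                conn.sendall(banner)        # the bait
                conn.settimeout(2.0)
                try:
                    data = conn.recv(4096)  # the attacker's first move
                except socket.timeout:
                    data = b""
                log.append({"peer": addr[0], "sent": data})
        srv.close()

    thread = threading.Thread(target=serve, daemon=True)
    thread.start()
    return port, log, thread

# Simulate one "attacker" probing the decoy.
port, log, thread = run_honeypot()
probe = socket.create_connection(("127.0.0.1", port))
seen_banner = probe.recv(64)
probe.sendall(b"SSH-2.0-paramiko_3.4\r\n")
probe.close()
thread.join(timeout=5)
print(log)
```

An AI-powered honeypot replaces the hard-coded banner and canned responses with behaviour learned from attacker interactions; the logging and deception surface shown here is what that learning loop feeds on.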


Almost all developers are using AI despite security concerns, survey suggests

Many developers place far too much trust in the security of code suggestions from generative AI, the report noted, despite clear evidence that these systems consistently make insecure suggestions. “The way that code is generated by generative AI coding systems like Copilot and others feels like magic,” Maple said. “When code just appears and functionally works, people believe too much in the smoke and mirrors and magic because it appears so good.” Developers can also value machine output over their own talents, he continued. “There’s almost an imposter syndrome,” he said. ... Because AI coding systems use reinforcement learning to improve and tune their results, when users accept insecure open-source components embedded in suggestions, the systems become more likely to label those components as secure even when they are not, the report continued. This risks creating a feedback loop: developers accept insecure open-source suggestions from AI tools, those suggestions go unscanned, and the result poisons not only the organization’s application code base but the AI systems’ own recommendation engines, it explained.
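
The feedback loop the report describes is broken by scanning suggestions before they are accepted. As a toy illustration of that gate (the patterns checked and the sample suggestion are my own assumptions, not the report's tooling, and a real pipeline would use a proper SAST and dependency scanner), a few notorious red flags can be caught with Python's `ast` module:

```python
import ast

RISKY_CALLS = {"eval", "exec", "compile"}

def scan_suggestion(code: str):
    """Flag a few well-known insecure patterns in an AI code suggestion
    before it is accepted. A toy gate, not a substitute for real
    static analysis."""
    findings = []
    for node in ast.walk(ast.parse(code)):
        # direct calls to eval/exec/compile
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id in RISKY_CALLS:
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # any call made with shell=True (command injection risk)
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if kw.arg == "shell" and isinstance(kw.value, ast.Constant) \
                        and kw.value.value is True:
                    findings.append(f"line {node.lineno}: shell=True")
        # hardcoded secrets in simple assignments
        if isinstance(node, ast.Assign):
            for tgt in node.targets:
                if isinstance(tgt, ast.Name) and "password" in tgt.id.lower() \
                        and isinstance(node.value, ast.Constant) \
                        and isinstance(node.value.value, str):
                    findings.append(f"line {node.lineno}: hardcoded password")
    return findings

# A hypothetical AI suggestion that "functionally works" but is unsafe.
suggestion = (
    "import subprocess\n"
    "password = 'hunter2'\n"
    "subprocess.run(cmd, shell=True)\n"
    "eval(user_input)\n"
)
for finding in scan_suggestion(suggestion):
    print(finding)
```

Wiring even a crude check like this into the accept step means insecure suggestions are at least seen before they land in the code base, rather than silently reinforcing the model's recommendations.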


Former Uber CISO Speaks Out, After 6 Years, on Data Breach, SolarWinds

Sullivan says the key mistake he made was not bringing in third-party investigators and counsel to review how his team handled the breach. "The thing we didn't do was insist that we bring in a third party to validate all of the decisions that were made," he says. "I hate to say it, but it's more CYA." Now, Sullivan advises other CISOs and companies on navigating their responsibilities in disclosing breaches, especially as the new Securities and Exchange Commission (SEC) incident reporting requirements are set to take effect. Sullivan says he welcomes the new regulations. "I think anything that pushes towards more transparency is a good thing," he says. He recalls that when he served on former President Barack Obama's Commission on Enhancing National Cybersecurity, he pushed to give companies immunity if they are transparent early on during security incidents. That hasn't happened until now, according to Sullivan, who says the jury is still out on the new regulations, which will require action starting in December.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein