
Daily Tech Digest - May 08, 2026


Quote for the day:

“Everything you’ve ever wanted is on the other side of fear.” -- George Addair



How enterprises can manage LLM costs: A practical guide

Managing large language model (LLM) costs has become a critical priority for enterprises as generative and agentic AI deployments scale. According to the InformationWeek guide, LLM expenses are primarily driven by token pricing and consumption, factors that remain notoriously difficult to forecast due to the iterative nature of AI workflows. This unpredictability is exacerbated by dynamic vendor pricing, a lack of specialized FinOps tools, and limited user awareness regarding how complex queries impact the bottom line. To mitigate these financial risks, the article recommends a multi-pronged approach: matching task complexity to model capability by using lower-cost LLMs for routine work, and implementing technical optimizations like response caching and prompt compression to reduce token usage. Furthermore, enterprises should utilize prompt libraries of validated, efficient inputs and leverage query batching for non-urgent tasks to access vendor discounts. While self-hosting models eliminates third-party token fees, the guide warns of significant underlying costs in infrastructure and energy. Ultimately, successful cost management requires a strategic balance where the productivity gains of AI clearly outweigh the operational expenditures. By proactively setting token allowances and comparing vendor rates, CIOs can prevent AI budgets from spiraling while still fostering innovation across the organization.
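Two of the optimizations the guide names, response caching and routing routine work to cheaper models, can be combined in a few lines. The sketch below is illustrative only: the model names, prices, and `call_model` callback are hypothetical stand-ins, not any vendor's actual API.

```python
import hashlib

# Hypothetical per-1K-token prices; real vendor rates vary and change often.
MODEL_PRICES = {"small": 0.0005, "large": 0.01}

class CachedRouter:
    """Route prompts to a cheap or expensive model and cache responses."""

    def __init__(self, call_model):
        self.call_model = call_model  # call_model(model_name, prompt) -> str
        self.cache = {}

    def ask(self, prompt, complex_task=False):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:  # cache hit: zero new tokens consumed
            return self.cache[key]
        # Match task complexity to model capability, per the guide's advice.
        model = "large" if complex_task else "small"
        answer = self.call_model(model, prompt)
        self.cache[key] = answer
        return answer
```

In practice teams often hash a normalized form of the prompt (lowercased, whitespace-collapsed) so trivially different phrasings still hit the cache.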


The Death of the Firewall

The article "The Death of the Firewall" by Chandrodaya Prasad explores why the firewall has survived decades of premature obituaries to remain a cornerstone of modern cybersecurity. Rather than becoming obsolete, the technology has successfully transitioned from a standalone perimeter appliance into a versatile, integrated architecture. The global firewall market continues to expand, currently valued at approximately $6 billion, as organizations face complex security challenges that identity-centric models alone cannot solve. The firewall has evolved through critical phases, including convergence with SD-WAN for simplified networking and integration with cloud-based Security Service Edge (SSE) frameworks. Crucially, it serves as a necessary enforcement point for inspecting encrypted traffic and implementing post-quantum cryptography. It remains indispensable in Operational Technology (OT) sectors, such as manufacturing and healthcare, where legacy systems and IoT devices cannot support endpoint agents or tolerate cloud-based latency. For these heavily regulated industries, the firewall is not merely an architectural choice but a fundamental requirement for regulatory compliance. Ultimately, the firewall’s endurance is attributed to its ongoing adaptation, offloading intelligence to the cloud while maintaining essential local execution. As cyber threats grow more sophisticated due to AI, the firewall is evolving into a vital, persistent component of a unified security fabric.


AI clones: the good, the bad, and the ugly

The Computerworld article "AI clones: The good, the bad, and the ugly" examines the dual-edged nature of digital personas, categorizing their applications into three distinct ethical spheres. Under "the good," the author highlights authorized use cases where public figures like Imran Khan and Eric Adams employ AI voice clones to transcend physical or linguistic barriers, amplifying their reach and accessibility. However, "the bad" introduces the problematic rise of nonconsensual professional cloning. Tools like "Colleague Skill" enable individuals to replicate the expertise and communication styles of coworkers or supervisors, often to retain institutional knowledge or manipulate workplace dynamics. This section also underscores the threat of sophisticated financial fraud perpetrated through voice impersonation. Finally, "the ugly" explores the deeply controversial territory of "Ex-Partner Skill" and "digital resurrection." These tools allow users to simulate interactions with former or deceased loved ones by mimicking subtle nuances and shared memories, raising profound ethical concerns regarding consent and emotional health. Ultimately, the piece argues that as AI cloning technology becomes more accessible, society must navigate the erosion of reality and establish clear boundaries to protect individual identity and privacy in an increasingly synthetic world.


Fire at Dutch data center has many unintended consequences

On May 7, 2026, a significant fire erupted at the NorthC data center in Almere, Netherlands, triggering a regional emergency response and demonstrating the fragility of modern digital infrastructure. The blaze, which originated in the technical compartment housing critical power systems, forced emergency services to order a total power shutdown. Although the server rooms remained largely protected by fire-resistant separations, the resulting outage caused widespread, often bizarre, secondary consequences. Beyond standard digital disruptions, the failure crippled physical security at Utrecht University, where students and staff were locked out of buildings and even restrooms because electronic access card systems failed completely. Public transit in Utrecht faced communication breakdowns, while healthcare billing services and numerous pharmacies across the country saw their operations grind to a halt. This incident serves as a stark wake-up call, proving that even ISO-certified facilities with redundant backups are susceptible to catastrophic failure when authorities prioritize safety over continuity. It underscores a critical lesson for organizations: business continuity plans must account for the unpredictable ripple effects of physical infrastructure loss. The event highlights the inherent risks of centralized digital dependencies, revealing that a localized technical fire can effectively paralyze diverse sectors of society far beyond the immediate flames.


The hidden cost of front-end complexity

The article "The Hidden Cost of Front-End Complexity" explores how modern web development has transitioned from solving rendering challenges to facing profound system design issues. While current frameworks have optimized UI performance and component modularity, complexity has not disappeared; instead, it has shifted "up the stack" into application logic and state coordination. Modern front-end engineers now shoulder responsibilities once reserved for multiple infrastructure layers, managing distributed APIs, CI/CD pipelines, and intricate data flows that reside within the browser. The author argues that the true "hidden cost" of this evolution is the significantly increased cognitive load required for developers to navigate a dense web of invisible dependencies and reactive chains. Consequently, development cycles slow down and maintainability suffers when state relationships remain opaque or poorly defined. To address these architectural failures, the industry must pivot from debating framework syntax or rendering speed to prioritizing a "state-first" architecture. In this paradigm, the UI is treated as a simple projection of a clearly modeled state. By shifting the focus toward explicit state representation and observable system design, engineering teams can manage the inherent complexity of large-scale applications more effectively. Ultimately, the future of the front-end lies in building systems that are fundamentally easier to reason about.
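The "state-first" idea, the UI as a pure projection of explicitly modeled state, is framework-agnostic and can be sketched in a few lines. This is a minimal illustration of the pattern, not code from the article; the `Store` and `render` names are invented for the example.

```python
class Store:
    """Minimal observable state container: views re-derive from state on every update."""

    def __init__(self, state):
        self._state = dict(state)
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def update(self, **changes):
        # Explicit, immutable-style update: no hidden reactive chains to trace.
        self._state = {**self._state, **changes}
        for fn in self._subscribers:
            fn(self._state)

def render(state):
    # A "view" is just a function of state, making the dependency visible.
    return f"{state['user']}: {state['unread']} unread"
```

Because every view is an explicit function of one state object, a developer can reason about any screen by inspecting the state alone, which is the cognitive-load reduction the article argues for.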


How Federated Identity and Cross-Cloud Authentication Actually Work at Scale

This article discusses the critical shift from traditional, secrets-based authentication to Federated Identity and Workload Identity Federation (WIF) within modern DevOps and multi-cloud environments. Historically, integrating services across clouds (such as Azure, AWS, or GCP) required storing long-lived service principal keys or static credentials, which posed significant security risks including credential leakage and management overhead. To solve this, Federated Identity utilizes OpenID Connect (OIDC) to establish a trust relationship between an external identity provider and a cloud resource. Instead of using persistent secrets, a workload—such as a GitHub Action or an Azure DevOps pipeline—requests a short-lived, ephemeral token from its identity provider. This token is then exchanged for a temporary access token from the target cloud service, which automatically expires after the task is completed. This approach eliminates the need for manual secret rotation and significantly reduces the attack surface by ensuring no permanent credentials exist to be stolen. By leveraging Managed Identities and structured OIDC exchanges, organizations can achieve a "zero-trust" authentication model that scales across diverse cloud providers, providing a more secure, automated, and maintainable framework for cross-cloud resource management and CI/CD workflows.
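The token-exchange flow described above can be sketched abstractly. Everything here is illustrative: real federation uses the identity provider's signed JWTs and the cloud's STS/token-exchange endpoints, not these invented function names, issuer URLs, or dictionary tokens.

```python
import time
import uuid

def issue_oidc_token(workload_id, issuer="https://ci.example.com"):
    """Identity provider side: mint a short-lived identity token for the workload."""
    return {"iss": issuer, "sub": workload_id,
            "exp": time.time() + 300,       # ephemeral: minutes, not months
            "jti": str(uuid.uuid4())}

def exchange_for_cloud_token(oidc_token,
                             trusted_issuers=frozenset({"https://ci.example.com"})):
    """Cloud side: verify the federated trust, then mint a temporary access token."""
    if oidc_token["iss"] not in trusted_issuers:
        raise PermissionError("issuer not federated with this cloud tenant")
    if oidc_token["exp"] < time.time():
        raise PermissionError("identity token expired")
    return {"access_token": str(uuid.uuid4()),
            "expires_at": time.time() + 3600}  # expires on its own; nothing to rotate
```

The security property falls out of the shape of the flow: the only long-lived artifact is the trust relationship itself, so there is no static secret to leak or rotate.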


Ten years later, has the GDPR fulfilled its purpose?

A decade after its adoption, the General Data Protection Regulation (GDPR) presents a bittersweet legacy, having fundamentally reshaped global corporate culture while facing significant modern hurdles. The regulation successfully elevated privacy from a legal footnote to a core management priority, institutionalizing principles like "privacy by design" and establishing a gold standard for international digital governance. However, experts highlight a growing disconnect between regulatory intent and practical application. While the GDPR empowered citizens with theoretical rights, the reality often manifests as "consent fatigue" through ubiquitous cookie pop-ups rather than providing meaningful control. Furthermore, the enforcement landscape reveals a stark gap; despite billions in issued fines, the actual collection rate remains remarkably low due to protracted legal appeals and the complexity of the "one-stop-shop" mechanism. International data transfers also remain a legal Achilles' heel, plagued by ongoing uncertainty across borders. The emergence of generative AI further complicates this framework, as massive training datasets and opaque algorithms challenge core tenets like data minimization and transparency. Additionally, the proliferation of overlapping EU regulations has created a "regulatory avalanche," making compliance increasingly difficult for smaller organizations. Ultimately, the article suggests that while the GDPR fulfilled its primary purpose, it now requires urgent refinement to remain relevant in a complex, AI-driven digital economy.


Bunkers, Mines, and Caverns: The World of Underground Data Centers

The article "Bunkers, Mines, and Caverns: The World of Underground Data Centers" by Nathan Eddy explores the growing strategic niche of subterranean infrastructure through the adaptive reuse of retired mines and Cold War-era bunkers. Predominantly found in North America and Northern Europe, these facilities offer a unique "underground advantage" centered on unparalleled physical security, environmental resilience, and inherent cooling efficiency. By repurposing sites like Iron Mountain’s Pennsylvania campus or Norway’s Lefdal Mine, operators benefit from a natural, impenetrable shield against extreme weather and external threats, making them ideal for high-security or mission-critical workloads. Furthermore, underground locations often bypass local "NIMBY" resistance because they are invisible to surrounding communities. However, the article notes that subterranean deployments present significant engineering and logistical hurdles. Managing humidity, ventilation, and heat dissipation requires complex systems, and retrofitting older structures can be costly. Site selection is also intricate, requiring rigorous assessments of structural stability and risks like water ingress or geological faults. Despite these challenges, underground data centers are no longer a novelty but a proven, permanent fixture in the industry. They are increasingly attractive in land-constrained hubs like Singapore and for highly regulated sectors, providing a sustainable and secure alternative to traditional above-ground facilities.


Why the future of software is no longer written — it is architected, governed and continuously learned

The article argues that software development is undergoing a fundamental structural shift, moving from manual coding to a paradigm defined by architecture, governance, and continuous learning. As generative AI and agentic systems take over the heavy lifting of building code, the role of the developer is evolving into that of an "intelligence orchestrator" who curates intent rather than writing lines of syntax. For CIOs, this transition represents a critical leadership inflection point where software is no longer just a business enabler but the primary engine for scaling enterprise intelligence. The focus is shifting from development speed to the strategic design of decision systems. This new era necessitates the rise of roles like the Chief AI Officer (CAIO) to govern AI as a strategic asset, ensuring security through zero-trust principles and navigating complex regulatory landscapes like the EU AI Act. While productivity gains are significant, organizations must proactively manage risks such as code hallucinations, model bias, and intellectual property concerns. Ultimately, the future of digital economies will be shaped by leaders who prioritize "intelligence orchestration" over traditional application building, fostering adaptive systems that learn and evolve. Success in 2026 requires a focus on three core mandates: architecting intelligence, governing AI assets, and aligning technology ecosystems with overarching corporate strategy.


Maximizing Impact Amid Constraints: The Role of Automation and Orchestration in Federal IT Modernization

Federal IT leaders currently face a challenging landscape where they must fortify complex digital environments against persistent threats while navigating significant fiscal uncertainty and budget constraints. According to a recent report, over sixty percent of these leaders struggle with monitoring tools across diverse hybrid environments, largely due to the persistence of legacy, multi-vendor systems that create integration gaps and increase operational costs. To overcome these hurdles, federal agencies must strategically embrace automation and orchestration as foundational components of a modern zero-trust architecture. By integrating AI-driven technologies for routine tasks like alert analysis and anomaly detection, IT teams can transition from a reactive posture to a proactive defense, effectively reducing monitoring complexity through single-pane-of-glass solutions. This methodical approach allows organizations to maximize the value of their existing investments while freeing up personnel for mission-critical initiatives. The success of such incremental improvements can be clearly measured through enhanced metrics like mean time to detection (MTTD) and mean time to resolution (MTTR). Ultimately, a disciplined, phased implementation of these technologies ensures that federal agencies maintain operational resilience and mission readiness. By focusing on strategic automation, IT leaders can deliver maximum impact for every budget dollar, ensuring that modernization efforts continue to advance despite the ongoing challenges of a resource-constrained environment.

Daily Tech Digest - February 19, 2025


Quote for the day:

"Go confidently in the direction of your dreams. Live the life you have imagined." -- Henry David Thoreau


Why Observability Needs To Go Headless

Not all logs have long-term value, but that’s one of the advantages of headless observability and decoupled storage. Teams have the freedom and flexibility to determine which logs should be retained for longer periods. Web application firewall (WAF) and other security logs can be retained over the long term and made available to cybersecurity teams and threat hunters. Other application logs can provide long-term insights into how resources are being used for capacity planning and anomaly detection. Let’s take a closer look at a real, tangible use case where observability data can be valuable for other teams: real user monitoring (RUM). In the realm of observability, RUM allows teams to proactively monitor how end users are experiencing web applications. Issues like slow page loads can be mitigated before they frustrate users. Beyond observability, RUM data can also provide insights into how your end users are interacting with your brand and your products. This data is invaluable for marketing, advertising and leadership teams that need to plan strategy. ... As a real-world example, many enterprises use CDN log data for real user monitoring. In the short term, monitoring CDNs is important for ensuring good user experiences and fast loading times of digital assets. However, being able to retain huge volumes of log data long term and cost-effectively provides certain advantages to enterprises.


Why the CIO role should be split in two

The fact is that within enterprises, existing architecture is overly complex, often including new digital systems interconnected with legacy systems. This ‘hybrid’ architecture is a combination of best and bad practice. When there is an outage, the new digital platforms can invariably be restored to recover business process support. But because they do not operate in isolation, instead connecting with legacy technologies, business operations themselves may not fully recover if the legacy systems continue to be impacted by the outage. For most enterprises stuck in this hybrid state, the way forward is to be more disciplined about architecture. ... Simplifying architecture at an enterprise level is something the CIO and CISO should pursue together as a shared goal. The benefits of doing so will accrue over time rather than immediately, hence there can be some reluctance to prioritize. ... What does all this have to do with my opening discussion about the CIO and complementary IT executive roles? Splitting the CIO role into smaller and smaller pieces would be okay if doing so led to better outcomes. But I would argue that examples like the ones above show that the multiple-exec approach is not a success story we should be bragging about. In this structure, the two CIOs would share ownership of the IT strategy.


Generative AI vs. the software developer

AI is not going to turn your customer support people (Elvis bless them) into senior software developers. A customer support person might be able to think “I need to track the connection between items in inventory, the customer’s shopping cart, and the discount pricing for a given item,” but unless that person also knows how to code, they will have a seriously hard time instructing an AI model to generate the code they need. Most likely, they aren’t going to know if the code the AI produces even runs, let alone works correctly. But AI can help actual developers in many ways. It can look at existing code you have written and help you produce the next thing that you need to write. It can even write large routines and classes that you ask it to. But it is not going to create the things you need without you having a large say in what that is. You need to know how to craft a prompt to get precisely what is needed. ... Now, that prompt will be pretty effective in getting what is asked for. But the trick here, obviously, is that you have to know what a React component is, what Tailwind is, the fact that you want tests, what TypeScript is, what null is, and that you’d even need to handle missing values. There is a lot of knowledge and experience wrapped up in that prompt, and it’s not something that an inexperienced developer, or certainly a non-developer, would be able to write.


Beyond the Screen: Humanising Digital Learning

Digital learning holds a lot of promise, aiming to bring the most dynamic and engaging elements of in-person training into the digital space. Interactive tools like quizzes, breakout rooms, and mini-tasks demonstrate just how far we’ve come in replicating real-world engagement online. However, we continue to see issues with retention and follow-through. Recent research shows that 66% of employees still find on-the-job learning to be more effective than formal online courses. This disconnect often stems from a lack of deep, meaningful engagement. Without it, employees are less likely to retain knowledge or apply their skills effectively in the workplace. This is particularly crucial when it comes to human skills—broader soft skills like communication, emotional intelligence, and critical thinking. Unlike technical skills that are typically learned ‘by the book’, softer skills are learned and applied every day. The solution lies in moving beyond passive consumption to real-world, interactive learning simulations. ... The shift to digital learning offers incredible potential, but realising that potential requires a thoughtful approach. By embracing AI-powered technologies and prioritising interactive, personalised and bite-sized content, organisations can create learning experiences that are engaging, practical and transformative.


Shadow AI: How unapproved AI apps are compromising security, and what you can do about it

Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage. It’s the digital steroid that allows those using it to get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.” ... “If you paste source code or financial data, it effectively lives inside that model,” Golan warned. Arora and Golan find that companies, by defaulting to shadow AI apps for a wide variety of complex tasks, are effectively feeding their data into public models. Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It’s especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which “could dwarf even the GDPR in fines,” and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools. There’s also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms aren’t designed to detect and stop.


Think being CISO of a cybersecurity vendor is easy? Think again

When people in this industry hear that a CISO is working at a cybersecurity vendor, it can trigger a number of assumptions — many of them misguided. There’s a stereotype that the role isn’t “real” CISO work, that it’s more akin to being a field CISO, someone primarily outward-facing and focused on supporting sales or amplifying the brand. The assumption goes something like this: How hard can it be to secure a security company, and isn’t the “real” work done at companies outside of this bubble? ... Some might think that working at a security company limits your perspective of what’s out there in the broader industry, but I found the opposite to be true. I gained a deeper understanding of how organizations evaluate security solutions and what they truly care about. I saw firsthand the challenges customers faced when implementing security tools, and that experience gave me empathy, insight, and a renewed ability to speak their language. Now that I’m back in industry, I’m bringing that perspective with me. The transition wasn’t a step “down” or a shift away from anything; it was just the next phase in my career. Security leadership is security leadership, no matter where you practice it. The challenges remain complex, the responsibilities remain vast, and the importance of aligning security with business outcomes remains paramount.


Lack of regulations, oversight in health care IT can cause harm

Increasingly, health care organizations have outsourced their health IT infrastructure to companies owned and operated by private equity, venture capital and Big Tech firms that view them as platforms to experiment with unproven AI and machine-learning tools. "The unregulated integration of AI tools into these systems will make it even harder to protect patients' rights," Appelbaum said. "Moreover, because these records contain so much information and are centralized, they are among the most lucrative targets for cyberattacks and hackers," Batt said, noting that in 2024, data breaches exposed the health records of more than 200 million Americans. As a result, health care organizations must now invest billions more in cybersecurity systems owned and operated by venture capital, private equity and Big Tech. The authors argue that the federal government is once again behind in setting safeguards for the adoption of new health IT, and that the lessons from 30 years of attempts to set adequate standards for information-sharing in electronic health systems—as detailed in these reports—should spur regulators to act quickly and rein in unregulated financial activities in health IT. Batt explained, "The history of the health IT implementation and the lack of sufficient regulatory oversight and enforcement of standards should give us great pause for the current enthusiasm over the adoption of AI and machine learning in health information systems."


The Future of Data: How Decision Intelligence is Revolutionizing Data

Decision Intelligence is an interdisciplinary field that uses AI to enhance all aspects of decision-making across all areas of a business. It blends concepts of Data Science (statistics, machine learning, AI, analytics) with Behavioral Sciences (psychology, neuroscience, economics, and managerial sciences) to understand how decisions are made and how outcomes are measured. ... Decision Intelligence (DI) can be considered a subset of applied AI: it uses AI to build a reliable data foundation by collecting, organizing, and connecting data, and then applies AI and analytics to turn that data into useful insights for better decision-making. In short, while AI provides the technology to mimic human intelligence, DI focuses on applying that technology to improve how decisions are made. ... You can use any of your machine learning models, like regression models, classification models, time series forecasting models, clustering algorithms, or reinforcement learning, for implementing Decision Intelligence. These machine learning models help identify patterns in the data and make predictions based on those patterns, but decision intelligence takes that information one step further by incorporating it into a broader framework that can actively guide the decision-making process by considering the predictions and the potential outcomes and consequences of different choices.
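That last distinction, a model predicting versus a DI layer weighing predicted outcomes against business consequences, can be made concrete. The sketch below is a toy example with invented names and numbers: `predict_churn_probability` stands in for any trained classifier, and the costs are arbitrary.

```python
def predict_churn_probability(customer):
    # Stand-in for a trained classifier's probability output.
    return 0.8 if customer["support_tickets"] > 5 else 0.1

def decide_retention_action(customer, offer_cost=50, customer_value=400):
    """DI layer: turn a prediction into an action by comparing expected outcomes."""
    p = predict_churn_probability(customer)
    expected_loss = p * customer_value              # expected cost of doing nothing
    expected_save = p * customer_value - offer_cost  # net value of intervening
    if expected_save > 0 and expected_loss > offer_cost:
        return "send_retention_offer"
    return "no_action"
```

The ML model alone only says "this customer is likely to churn"; the decision layer is what weighs that prediction against the cost and value of each available action.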


ManpowerGroup exec explains how to manage an AI workforce

It’s not just a technology anymore. We are looking for individuals who have the industry experience. We can take somebody with industry experience and train them on the technical part of the job. It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration. ... The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology. ... There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us? Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution.


Debunking the AI Hype: Inside Real Hacker Tactics

While headlines are trumpeting AI as the one-size-fits-all new secret weapon for cybercriminals, the statistics—again, so far—are telling a very different story. In fact, after poring over the data, Picus Labs found no meaningful upswing in AI-based tactics in 2024. Yes, adversaries have started incorporating AI for efficiency gains, such as crafting more credible phishing emails or creating/debugging malicious code, but they haven't yet tapped AI's transformational power in the vast majority of their attacks so far. In fact, the data from the Red Report 2025 shows that you can still thwart the majority of attacks by focusing on tried-and-true TTPs. ... Attackers are increasingly targeting password stores, browser-stored credentials, and cached logins, leveraging stolen keys to escalate privileges and spread within networks. This threefold jump underscores the urgent need for ongoing and robust credential management combined with proactive threat detection. Modern infostealer malware orchestrates multi-stage heists blending stealth, automation, and persistence. With legitimate processes cloaking malicious operations and actual day-to-day network traffic hiding nefarious data uploads, bad actors can exfiltrate data right under your security team's proverbial nose, no Hollywood-style "smash-and-grab" needed. Think of it as the digital equivalent of a perfectly choreographed burglary.

Daily Tech Digest - January 13, 2025

Artificial intelligence is optimising the entire M&A lifecycle by providing data-driven insights at every stage to enable informed decisions. Companies considering a merger or acquisition can use AI to understand market trends, performance of past deals, and other events of relevance to decide the way forward. When evaluating potential candidates, big data, analytics and AI algorithms help process vast corporate information from a variety of sources – financial statements, analyst briefings, media reports, and more – to identify acquisition targets meeting their requirements. AI augments experts in due diligence, performing complex financial modelling, reviewing extensive legal documents, and conducting risk analysis with higher accuracy in a fraction of the time compared to existing methods. ... When replacing a legacy enterprise system with a cloud-based solution, organisations can become operational within six to fourteen months, depending on size, which is much faster than the time taken in a traditional on-premise scenario. ... Differences in the merging companies’ technology architectures, tools and configurations make it extremely challenging to ascertain M&A security posture accurately, completely, and on time, even if the organisations are already on the same cloud.


Time for a change: Elevating developers’ security skills

With detection and remediation tools simplifying code security in the same environments they trained in, it’s not unreasonable to think that junior engineers could maintain the ability to perform this basic task as well as maintain an understanding of the risks and consequences of the vulnerabilities they create as they draft code. For mid-level engineers, given the increased security proficiency earlier in their careers, it can now be expected that they take responsibility for code security with their engineers before code is even reviewed by senior developers. ... For this effort, developers get a pretty substantial boost to their skill set with this deepened security knowledge, which can be very valuable given the current state of affairs for hiring cybersecurity professionals with a dearth of talent available, growing backlogs, and increasing cybersecurity risks in number and scope. Most importantly, they can achieve it without sacrificing productivity – detecting and remediating vulnerabilities can be done as easily as spellcheck finds spelling errors, and training can be short and tailored to what they’re working on, all within the integrated development environment (IDE) they work in every day. ... In addition, organizations can finally achieve the vision of true shift-left by integrating security into every level of the SDLC and adopting the culture of security they’ve rightly been clamoring for.


How Your Digital Footprint Fuels Cyberattacks — and What to Do About It

If you are like most of us, you have been using digital services for years not realizing that you have been giving hackers access to the details of your personal life. On social media, we voluntarily share PII about who we are and where we are, using the location check-in features. ... Reducing your digital footprint doesn’t have to mean going off the grid. Here are some practical steps you can take:
- Use separate emails for different accounts: Don’t rely on one email for everything. This minimizes the damage if one account is hacked – it won’t lead hackers to all your other services.
- Review privacy settings regularly: Many apps have default settings that overshare your information. For instance, on apps like Strava or Telegram, you can turn off location tracking and limit who can contact you or add you to conversations. A quick check of these settings can significantly reduce your exposure.
- Avoid saving passwords in web browsers: Browsers prioritize convenience, not security. Instead, use a password manager. These tools securely store your passwords and can generate strong, unique ones for each account. This reduces the risk of malware or phishing attacks stealing your credentials directly from your browser.
- Think before you post: Share less on social media, especially in real time. This will make you harder to track and target.


What is career catfishing, the Gen Z strategy to irk ghosting corporates?

After slogging through the exhausting process of job hunting — submitting countless applications, enduring endless rounds of interviews, and anxiously waiting for updates from unresponsive hiring managers — Gen Z workers have found a way to reclaim the balance of power. The rising trend, dubbed “career catfishing,” involves Gen Zs (those aged 27 and under) accepting job offers only to never show up on their first day. According to a survey by CV Genius, which polled 1,000 UK employees across generations, approximately 34 per cent of Zoomers admitted to engaging in career catfishing. ... Gen Z alone cannot shoulder the blame for the rise of such behaviours. Office ghosting — where one party cuts off communication without notice — is now a common phenomenon. ... Managers and owners identified entitlement, motivation, lack of effort, and productivity as reasons for terminating Gen Z employees. Some even referred to them as the snowflake generation and claimed they were too easily offended, which further justified their dismissal. The practice of career catfishing could further reinforce these stereotypes, making it even harder for young professionals to build trust with potential employers.


The next AI wave — agents — should come with warning labels

AI agents that use unclean data can introduce errors, inconsistencies, or missing values that make it difficult for the model to make accurate predictions or decisions. If the dataset has missing values for certain features, for instance, the model might incorrectly assume relationships or fail to generalize well to new data. An agent could also draw data from individuals without consent or use data that’s not anonymized properly, potentially exposing personally identifiable information. Large datasets with missing or poorly formatted data can also slow model training and cause it to consume more resources, making it difficult to scale the system. In addition, while AI agents must also comply with the European Union’s AI Act and similar regulations, innovation will quickly outpace those rules. Businesses must not only ensure compliance but also manage various risks, such as misrepresentation, policy overrides, misinterpretation, and unexpected behavior. “These risks will influence AI adoption, as companies must assess their risk tolerance and invest in proper monitoring and oversight,” according to a Forrester Research report — “The State Of AI Agents” — published in October. 
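The missing-value problem described above can be made concrete. This is an illustrative sketch, not from the article: mean imputation, one of the simplest ways to keep incomplete records from skewing a model or agent downstream. The record fields and values are hypothetical.

```python
# Illustrative sketch: mean imputation for missing feature values before
# a dataset is handed to a model or agent. Rows with None would otherwise
# skew or break downstream training.

from statistics import mean

def impute_missing(rows, col):
    """Replace None in column `col` with the mean of the observed values."""
    observed = [r[col] for r in rows if r[col] is not None]
    fill = mean(observed)
    for r in rows:
        if r[col] is None:
            r[col] = fill
    return rows

records = [
    {"age": 34, "income": 72000},
    {"age": None, "income": 58000},   # missing value
    {"age": 46, "income": None},      # missing value
]
impute_missing(records, "age")
impute_missing(records, "income")
print(records[1]["age"])     # mean of 34 and 46 -> 40
print(records[2]["income"])  # mean of 72000 and 58000 -> 65000
```

Real pipelines use richer strategies (median, model-based imputation, or dropping rows), but even this minimal version shows why unvalidated gaps in data directly shape what an agent learns.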


Euro-cloud Anexia moves 12,000 VMs off VMware to homebrew KVM platform

“We used to pay for VMware software one month in arrears,” he said. “With Broadcom we had to pay a year in advance with a two-year contract.” That arrangement, the CEO said, would have created extreme stress on company cashflow. “We would not be able to compete with the market,” he said. “We had customers on contracts, and they would not pay for a price increase.” Windbichler considered legal action, but felt the fight would have been slow and expensive. Anexia therefore resolved to migrate, a choice made easier by its ownership of another hosting business called Netcup that ran on a KVM-based platform. Another factor in the company’s favour was that it disguised the fact it ran VMware with an abstraction layer it called “Anexia Engine” that meant customers never saw Virtzilla’s wares and instead worked in a different interface to manage their VM fleets. ... The CEO thinks more companies will move from VMware. “I do not believe Broadcom will be successful,” he told The Register. “They lost all the trust. I have talked to so many VMware customers and they say they cannot work with a company like that.” Regulators are also interested in Broadcom’s practices, he said.


Preparing for AI regulation: The EU AI Act

Among the uses of AI that are banned under Article 5 are AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques. Article 5 also prohibits the use of AI systems that exploit any of the vulnerabilities of a person or a specific group of people due to their age, disability, or a specific social or economic situation. Systems that analyse social behaviours and then use this information in a detrimental way are also prohibited under Article 5 if their use goes beyond the original intent of the data collection. Other areas covered by Article 5 include the use of AI systems in law enforcement and biometrics. Industry observers describe the act as a “risk-based” approach to regulating artificial intelligence. ... Organisations operating in the EU will need to take into account CSRD. Given the power-hungry nature of machine learning and AI inference, the extent to which AI is used may well be influenced by such regulations going forward. While it builds on existing regulations, as Mélanie Gornet and Winston Maxwell note in the Hal Open Science paper The European approach to regulating AI through technical standards, the AI Act takes a different route from these. Their observation is that the EU AI Act draws inspiration from European product safety rules.


Enterprise Data Architecture: A Decade of Transformation and Innovation

Privacy and compliance drive architectural decisions. The One Identity Graph we developed manages complex customer relationships while ensuring CCPA and GDPR compliance. This graph-based solution has prevented data breaches and reduced regulatory risks by implementing automated data lineage tracking, consent management, and real-time data masking. These features reinforce customer trust through transparent data handling and granular access controls. The business impact proves substantial. The platform’s real-time fraud detection analyzes transaction patterns across multiple channels, preventing fraudulent activities before completion. It optimizes inventory dynamically across thousands of locations by simultaneously processing point-of-sale data, supply chain updates, and external market factors. Supply chain disruptions trigger immediate alerts through a sophisticated event correlation engine, enabling preventive action before customer impact. Edge computing represents the next frontier. Processing data closer to its source minimizes latency, critical for IoT applications and real-time decisions. Our implementation reduces data transfer costs by 40% while improving response times for customer-facing applications. 
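As a hedged illustration of the real-time data masking mentioned above, here is a minimal sketch. The rules, field names, and the `mask_record` helper are hypothetical placeholders, not the One Identity Graph's actual implementation.

```python
# Hypothetical sketch of field-level masking for PII. Which fields a caller
# may see unmasked is driven by an `allowed_fields` access policy.

MASK_RULES = {
    "email": lambda v: v[0] + "***@" + v.split("@")[1],
    "ssn":   lambda v: "***-**-" + v[-4:],
}

def mask_record(record, allowed_fields):
    """Return a copy of `record` with non-allowed sensitive fields masked."""
    out = {}
    for key, value in record.items():
        if key in MASK_RULES and key not in allowed_fields:
            out[key] = MASK_RULES[key](value)
        else:
            out[key] = value
    return out

customer = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
print(mask_record(customer, allowed_fields={"name"}))
# {'name': 'Ada', 'email': 'a***@example.com', 'ssn': '***-**-6789'}
```

In a production system the masking rules and allowed fields would come from consent records and granular access controls rather than hard-coded dictionaries.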


AI is set to transform education — what enterprise leaders can learn from this development

While AI tools show immense promise in addressing resource constraints, their adoption raises broader questions about the role of human connection in learning. Which brings us back to Unbound Academy. Students will spend two hours online each school morning working through AI-driven lessons in math, reading, and science. Tools like Khanmigo and IXL will personalize the instruction and analyze progress, adjusting the difficulty and content in real-time to optimize learning outcomes. The Charter application asserts that “this ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.” Unbound Academy’s model significantly reduces the role of human teachers. Instead, human “guides” provide emotional support and motivation while also leading workshops on life skills. What will students lose by spending most of their learning time with AI instead of human instructors, and how might this model reshape the teaching profession? The Unbound Academy model is already in use at several private schools, and the results from those schools are cited to substantiate its claimed advantages. ... For any of this to happen, the industry needs action that matches the rhetoric.


6 ways continuous learning can advance your career

Joys said thinking critically is about learning how a new idea or innovation might be translated into the current organizational context. "At the end of the day, the company is writing a paycheck for you," he said. "Think about how new stuff provides business value." Joys said professionals also need to ensure the benefits of the things they introduce through their learning processes are tracked and traced. "That's about measuring those efforts to ensure you can say, 'Here's a new piece of technology. Here's how we'll measure how this technology lines up with our corporate strategy and vision.'" ... Worsley told ZDNET he likes to learn on the job rather than acquire new knowledge in the classroom. "I'm not a bookish person. I don't go out and read. I recognize that I need to learn specific things because I've got a problem to solve," he said. "I'll learn about it, get the right people talking, and get the solutions underway. Tell me something's impossible and I'll tell you it's not." ... Keith Woolley, chief digital and information officer at the University of Bristol, said the great thing about his job is that it's like a hobby. "I'm naturally interested in what I do. So, I read things around me without realizing I'm consuming other information," he said. "If you're excited about what you do, learning comes naturally because it's a genuine interest. Then learning happens when you don't expect it."



Quote for the day:

"Doing what you love is the cornerstone of having abundance in your life." -- Wayne Dyer

Daily Tech Digest - November 07, 2024

Keep Learning or Keep Losing: There's No Finish Line

Traditional training and certifications are a starting point, but they're often not enough to prepare professionals for real-world challenges. Current research supports a need for cybersecurity education to be interactive, with practical approaches that deepen both engagement and understanding. ... For cybersecurity professionals, a commitment to lifelong learning is a career advantage. Those who prioritize continuous education stand out, not only because they keep pace with industry advancements but also because they demonstrate a proactive mindset valued by employers. Embracing lifelong learning positions professionals for growth, higher responsibility and leadership opportunities within their organizations. Organizations that foster a culture of continuous learning create an environment in which employees feel empowered and supported in their growth. These organizations often find they retain talent longer and perform better in crisis situations because their teams are both knowledgeable and resilient. By prioritizing ongoing education, companies can cultivate a workforce that's agile, engaged and better prepared to face cyberthreats head-on. In cybersecurity, the question isn't whether you'll keep learning - it's how you'll keep learning. 


Top 5 security mistakes software developers make

“A very common practice is the lack of or incorrect input validation,” Tanya Janca, who is writing her second book on application security and has consulted for many years on the topic, tells CSO. Snyk also has blogged about this, saying that developers need to “ensure accurate input validation and that the data is syntactically and semantically correct.” Stackhawk wrote, “always make sure that the backend input is validated and sanitized properly.” ... One aspect of lax authentication has to do with what is called “secrets sprawl,” the mistake of using hard-coded credentials in the code, including API and encryption keys and login passwords. Git Guardian tracks this issue and found that almost every breach exposing such secrets remained active for at least five days after the software’s author was notified. They found that a tenth of open-source authors leaked a secret, which amounts to bad behavior of about 1.7 million developers. ... But there is a second issue that goes to understanding security culture so you can make the right choices of tools that will actually get deployed by your developers. Jeevan Singh blogs about this issue, mentioning that you have to start small and not just go shopping for everything all at once, “so as not to overwhelm your engineering organization with huge lists of vulnerabilities. ..."
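The input-validation advice quoted above can be sketched in a few lines. This is a minimal illustration assuming a hypothetical order payload; the pattern and limits are placeholders, not a complete validation scheme.

```python
# Minimal sketch of input validation: check that input is both
# syntactically valid (format) and semantically valid (makes sense).

import re

# Allow only word characters, 3-20 long (illustrative policy).
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,20}$")

def validate_order(payload):
    """Return a list of validation errors; empty list means valid."""
    errors = []
    user = payload.get("username", "")
    if not USERNAME_RE.fullmatch(user):                    # syntactic check
        errors.append("username must be 3-20 word characters")
    qty = payload.get("quantity")
    if not isinstance(qty, int) or not (1 <= qty <= 100):  # semantic check
        errors.append("quantity must be an integer between 1 and 100")
    return errors

print(validate_order({"username": "alice_1", "quantity": 5}))    # []
print(validate_order({"username": "<script>", "quantity": -3}))  # two errors
```

An allowlist pattern like this also blocks the injection-style payloads (angle brackets, quotes) that lax validation lets through, which is the point both Snyk and Stackhawk make above.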


There is No Autonomous Network Without Observability

One of the best things about observability is how it strengthens network resilience. Downtime not only damages your reputation and frustrates your customers; it is also flat-out expensive. Observability helps you spot vulnerabilities before they become major issues. With real-time insights, you can jump in and make fixes before they lead to downtime or degraded performance. Plus, observability works hand-in-hand with AI-driven assurance systems. By constantly monitoring performance, these systems diligently look for patterns that might hint at future problems. They can make proactive adjustments, which cut down on the need for manual intervention. The result? A network that is more self-reliant, adaptive, and able to keep running smoothly. Observability doesn’t just stop there—it also steps up your security game. With threat detection built into every layer of the network, observability helps your network identify and deal with security issues in real time, making it not just self-healing but self-securing. ... Today’s networks are not confined to one domain anymore. We are working with multi-domain networks that tie together radio, transport, and cloud technologies. That creates a massive amount of data, and managing that data in real time is a challenge.


Building a better future: The enterprise architect’s role in leading organizational transformation

Architects bring unique capabilities that make them well-suited for leadership roles in an evolving business landscape. Their core strength lies in aligning technology with business goals. This keeps innovation and growth interconnected. Unlike traditional executives, architects have a holistic view of both domains, allowing them to see the big picture and drive meaningful change. With deep technical expertise, architects can navigate complex systems, platforms, and infrastructures. But their strategic thinking sets them apart—they don’t just focus on technology in isolation. They understand how it drives business value, enabling them to make informed decisions that benefit both the organization and its customers. Moreover, architects are natural collaborators. They excel at bridging gaps between different business units, fostering cross-functional teams, and ensuring integrated solutions that work for the entire organization. This ability to collaborate across departments makes them ideal for leadership in a world that values adaptability, inclusivity, and alignment over rigid command structures. The shift from a ‘command and control’ leadership mode to one of ‘align and collaborate’ is transforming how organizations are managed. 


How ‘Cheap Fakes’ Exploit Our Psychological Vulnerabilities

Cheap fakes exploit a range of psychological vulnerabilities, like fear, greed, and curiosity. These vulnerabilities make social engineering attacks prevalent across the board -- over two-thirds of data breaches involve a human element -- but cheap fakes are particularly effective at leveraging them. This is because many people are unable to identify manipulated media, particularly when it aligns with their preconceptions and existing biases. According to a study published in Science, false news spreads much faster than accurate information on social media. Researchers found several explanations for this phenomenon: false news tends to be more novel than the truth, and the stories elicited “fear, disgust, and surprise in replies.” Cheap fakes rely on these emotions to spread quickly and capture victims’ attention -- they create inflammatory imagery, aim to increase political and social division, and often present fragments of authentic content to produce the illusion of legitimacy. At a time when cheap fakes and deepfakes are rapidly proliferating, IT teams must emphasize a core principle of cybersecurity: Verify before you trust. Employees should be taught to doubt their initial reactions to digital content, particularly when that content is sensational, coercive, or divisive.... 


Cloud vs. On-Prem: Comparing Long-Term Costs

You’ve seen many reports of companies saving millions of dollars by moving a portion or majority of their workloads out of the cloud. Whether leaving the cloud is financially viable will depend on your workload, business requirements, and other factors, but here are some basic guidelines to consider. Big cloud providers have historically made moving all your data out of their cloud cost-prohibitive. Saving millions of dollars on computing will not make sense if it costs millions to move your data. ... You would have to reduce your cloud spend by 90-96% to save as much money as buying hardware. Reserved instances and spots may save money, but never that much. Budgeting hardware and collocation space will be easier to engineer and more predictable for your long-term projected spending. Spending this much money also means you are likely continuously upgrading based on your cloud provider’s upgrade requirements. You will frequently upgrade operating systems, database versions, Kubernetes clusters, and serverless runtimes. And you have no agency to delay them until it works best for your business. But saving on personnel costs isn’t the only benefit. A frequent phrase when using the cloud is “opportunity cost.”
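The 90-96% figure is back-of-the-envelope arithmetic. As a hedged sketch, the required discount can be computed from a few assumed inputs; all dollar figures below are hypothetical placeholders, and real comparisons would add staffing, power, and egress costs.

```python
# Back-of-the-envelope cloud-vs-hardware comparison: by what fraction must
# cloud spend drop to match owning hardware over a given horizon?

def required_cloud_discount(monthly_cloud_cost, hardware_capex,
                            monthly_colo_cost, horizon_months):
    """Fraction by which cloud spend must fall to match on-prem totals."""
    cloud_total = monthly_cloud_cost * horizon_months
    onprem_total = hardware_capex + monthly_colo_cost * horizon_months
    return 1 - onprem_total / cloud_total

# e.g. $500k/month cloud vs $1.2M hardware + $20k/month colo over 3 years
discount = required_cloud_discount(500_000, 1_200_000, 20_000, 36)
print(f"{discount:.0%}")  # 89%
```

With these placeholder numbers, cloud spend would have to drop roughly 89% to break even, in line with the 90-96% range the article cites.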


Data Center Regulation Trends to Watch in 2025

Governments are increasingly focused on creating new or updated regulations to strengthen digital resiliency and cybersecurity because of the growing importance of IT in critical services, rising geopolitical tensions, explosion of cyberattacks and increased outsourcing to cloud, according to the Uptime Institute. EU’s DORA requires the finance industry to establish a risk management framework, which includes business continuity and disaster recovery plans that include data backup and recovery; incident reporting; digital operational resilience testing; information sharing of cyber threats with other financial institutions; and managing the risk of their third-party information and communications technology (ICT) providers, such as cloud providers. “You’ve got to make sure your data center is robust, resilient, and that it doesn’t go down. And if it does go down, you’re responsible for it,” said Rahiel Nasir, IDC’s associate research director of European Cloud and lead analyst of worldwide digital sovereignty. Financial businesses will have to ensure their third-party providers meet regulatory requirements by negotiating it into their contracts. As a result, both the finance sector and their service providers will need to implement the tools and procedures necessary to comply with DORA, an IDC report said.


How AI will shape the next generation of cyber threats

In essence, AI turns advanced attack strategies into point-and-click operations, removing the need for deep technical knowledge. Attackers won’t need to write custom code or conduct in-depth research to exploit vulnerabilities. Instead, AI systems will analyze target environments, find weaknesses and even adapt attack patterns in real time without requiring much input from the user. This shift greatly widens the pool of potential attackers. Organizations that have traditionally focused on defending against nation-state actors and professional hacker groups will now have to contend with a much broader range of threats. Eventually, AI will empower individuals with limited tech knowledge to execute attacks rivaling those of today’s most advanced adversaries. To stay ahead, defenders must match this acceleration with AI-powered defenses that can predict, detect and neutralize threats before they escalate. In this new environment, success will depend not just on reacting to attacks but on anticipating them. Organizations will need to adopt predictive AI capabilities that can evolve alongside the rapidly shifting threat landscape, staying one step ahead of attackers who now have unprecedented power at their fingertips.


Navigating Privacy and Ethics in the Military use of AI

The report articulates the importance of integrating data governance into the development and deployment of military AI systems, and stresses that as military AI becomes increasingly central to national defense, so too does the need for clear, ethical, and transparent practices surrounding the data used to train these systems. “Data plays a critical role in the training, testing, and use of artificial intelligence, including in the military domain,” the report says, emphasizing that “research and development for AI-enabled military solutions is proceeding at breakneck speed” and therefore “the important role data plays in shaping these technologies have implications and, at times, raises concerns.” The report says “these issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, as well as their susceptibility to cyberattacks and interference (for example, data poisoning),” and points out that “pathways and governance solutions to address these issues remain scarce and very much underexplored.” Afina and Sarah Grand-Clément said the risk of data breaches or unauthorized access to military data also is a critical concern. 


AI in Cybersecurity: Balancing Innovation with Risk

Generative AI has advanced to a point where it can produce unique, grammatically sound, and contextually relevant content. Cybercriminals utilise this technology to create convincing phishing emails, text messages, and other forms of communication that mimic legitimate interactions. Unlike traditional phishing attempts, which often exhibit suspicious language or grammatical errors, AI-generated content can evade detection and manipulate targets more effectively. Furthermore, AI can produce deepfake videos or audio recordings that convincingly impersonate trusted individuals, increasing the likelihood of successful scams. ... AI, particularly Machine Learning (ML) and deep learning, can be instrumental in detecting suspicious activities and identifying abnormal patterns in network traffic. AI can establish a baseline of normal behavior by analysing vast datasets, including traffic trends, application usage, browsing habits, and other network activity. This baseline can serve as a guide for spotting anomalies and potential threats. AI’s ability to process large volumes of data in real-time means it can flag suspicious activities faster and more accurately, enabling immediate remediation and minimising the chances of a successful cyberattack. 
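The "baseline of normal behavior" idea above can be illustrated with a deliberately minimal stand-in: production systems use ML models over many features, but a z-score check against historical traffic conveys the mechanism. The traffic numbers are made up.

```python
# Minimal anomaly detection: establish a baseline from history, then flag
# new samples that deviate from it by more than `threshold` standard
# deviations. A toy stand-in for the ML-based detection described above.

from statistics import mean, stdev

def find_anomalies(history, new_samples, threshold=3.0):
    """Return samples more than `threshold` std devs from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in new_samples if abs(x - mu) / sigma > threshold]

# bytes/sec observed in past traffic vs. fresh readings
baseline = [980, 1020, 1000, 995, 1010, 990, 1005, 1000]
print(find_anomalies(baseline, [1003, 5000, 998]))  # [5000]
```

The same shape scales up: replace the mean/stdev baseline with a learned model of traffic trends, application usage, and browsing habits, and the flagged outliers become candidates for immediate remediation.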



Quote for the day:

“It’s better to look ahead and prepare, than to look back and regret.” -- Jackie Joyner Kersee

Daily Tech Digest - July 16, 2024

Learning cloud cost management the hard way

The rapid adoption of cloud technologies has outpaced the development of requisite skills within many organizations, leading to inefficiencies in provisioning, managing, and optimizing cloud resources. The No. 1 excuse that I hear from those overspending on cloud computing is that they can’t find the help they need to maximize cloud resources. They are kicking years of cloud-powered technical debt down the road, hoping that someone or some tool will come along to fix everything. ... Automation tools powered by AI can play a crucial role in ensuring that resources are only provisioned when needed and decommissioned when not in use, thus preventing idle resources from unnecessarily accruing costs. Moreover, a robust cost governance framework is essential for cloud cost management. This framework should include policies for resource provisioning, usage monitoring, and cost optimization. ... It’s frustrating that we’ve yet to learn how to do this correctly. 2020 wants its cloud spending problems back. This is not the only survey I’ve seen that reveals cost inefficiencies on a massive scale. I see this myself.
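At its core, the automated decommissioning described above is a policy check over utilization data. A minimal sketch, assuming hypothetical resource fields and thresholds; real tooling would pull these metrics from the provider's monitoring APIs.

```python
# Hypothetical idle-resource policy: flag resources whose average CPU has
# stayed below a floor for longer than an allowed idle window, so they can
# be reviewed and decommissioned instead of accruing cost.

def idle_candidates(resources, cpu_floor=5.0, idle_hours=72):
    """Return IDs of resources that look idle under the policy."""
    return [
        r["id"] for r in resources
        if r["avg_cpu_pct"] < cpu_floor and r["hours_since_active"] >= idle_hours
    ]

fleet = [
    {"id": "vm-web-1",  "avg_cpu_pct": 42.0, "hours_since_active": 0},
    {"id": "vm-test-7", "avg_cpu_pct": 1.2,  "hours_since_active": 96},
    {"id": "vm-etl-3",  "avg_cpu_pct": 3.8,  "hours_since_active": 12},
]
print(idle_candidates(fleet))  # ['vm-test-7']
```

The governance framework supplies the thresholds; the automation merely enforces them continuously instead of waiting for someone to notice the bill.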


The Interplay Of IoT And Critical Infrastructure Security

The vast number of interconnected devices in an IoT-driven infrastructure creates a massive attack surface. These devices often have limited processing power and lack robust security features, which can make them easy targets. ... The silver lining is that such heterogeneous cabling architectures can be glued together to deliver fiber-grade connectivity without the need to build new high-cost networks from scratch. An illustration of this tactic is Actelis’ hybrid-fiber technology harnessing high-performance managed Ethernet access switches and extenders to make the most of existing network infrastructures and provide gigabit speeds via virtually any wireline media. Actelis’ hybrid-fiber networking concept includes sections of fiber (for the easy-to-reach-with-fiber locations) and copper/coax that can be upgraded with Actelis’ technology to run fiber-grade communication. The company does both and provides management, security, and end-to-end integration for such entire networks, including fiber parts. This is important, as it represents a significant part of the market, selling both fiber and non-fiber networking.


MIT Researchers Introduce Generative AI for Databases

The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries. They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language. A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, they can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers. For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” ... Incorporating a probabilistic model can capture more complex interactions. Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.
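The example query asks for a conditional probability. GenSQL's own syntax is not shown here, so the following is a plain-Python stand-in that computes the same kind of estimate directly from raw records; GenSQL's advantage is that it answers such questions through a learned probabilistic model, which can generalize beyond the empirical frequencies this sketch returns.

```python
# Empirical conditional probability over dict records, as an illustration
# of the question "How likely is it that a developer from Seattle knows
# Rust?" posed in the article. The records are made up.

def conditional_prob(rows, condition, event):
    """Empirical P(event | condition) over a list of records."""
    matching = [r for r in rows if condition(r)]
    if not matching:
        return 0.0
    return sum(1 for r in matching if event(r)) / len(matching)

devs = [
    {"city": "Seattle", "knows_rust": True},
    {"city": "Seattle", "knows_rust": False},
    {"city": "Boston",  "knows_rust": True},
    {"city": "Seattle", "knows_rust": True},
]
p = conditional_prob(devs,
                     condition=lambda r: r["city"] == "Seattle",
                     event=lambda r: r["knows_rust"])
print(p)  # 2 of 3 Seattle developers, so about 0.67
```

A raw frequency like this breaks down when the condition matches few or no rows; querying a fitted probabilistic model, as GenSQL does, smooths over that sparsity and attaches calibrated uncertainty to the answer.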


Securing Applications in an Evolving API Security Landscape

Adding further fuel to the fire, threat actors constantly innovate, developing new techniques to target APIs. This includes exploiting zero-day vulnerabilities, leveraging stolen credentials through phishing attacks, or even using bots to automate brute-force attacks against API endpoints. Traditionally, API security focused on reactive measures, patching vulnerabilities and detecting attacks after they happened. However, GenAI allows attackers to automate tasks, churning out mass phishing campaigns or crafting malicious code specifically designed to exploit API weaknesses. These attacks, known for their speed and volume, easily overwhelm traditional security solutions designed for static environments. ... APIs are crucial for modern business operations, driving innovation and customer interactions. However, without proper security measures, APIs can be vulnerable to attacks that can put sensitive data at risk, disrupt operations, and damage customer trust. ... This underscores the real-world impact of inadequate API security, including delayed innovation, frustrated customers, and lost revenue. 


Fundamentals of Descriptive Analytics

Descriptive analytics helps to describe and present data in a format that can be easily understood by a wide variety of business readers. Descriptive analytics rarely attempts to investigate or establish cause-and-effect relationships. As this form of analytics doesn’t usually probe beyond surface-level analysis, the validity of its results is more easily established. Some common methods employed in descriptive analytics are observations, case studies, and surveys. ... The main disadvantage of descriptive analytics is that it only reports what has happened in the past or what is happening now, without explaining the root causes behind the observed behaviors or predicting what is about to happen in the future. The analysis is generally limited to a few variables and their relationships. However, descriptive analytics becomes a powerful business resource when it is combined with other types of analytics for assessing business performance. While descriptive analytics focuses on reporting past or current events, the other types of analytics explore the root causes behind observed trends and can also predict future outcomes based on historical data analysis.
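A minimal example of descriptive analytics in practice: summarizing what happened in a numeric series, with no causal or predictive claims. The sales figures are made up.

```python
# Descriptive analytics in miniature: report summary statistics of past
# data without inferring causes or forecasting.

from statistics import mean, median, stdev

def describe(values):
    """Basic descriptive summary of a numeric series."""
    return {
        "count": len(values),
        "mean": mean(values),
        "median": median(values),
        "stdev": stdev(values),
        "min": min(values),
        "max": max(values),
    }

monthly_sales = [120, 135, 128, 150, 110, 160]
print(describe(monthly_sales))
```

This is the layer that diagnostic, predictive, and prescriptive analytics then build on: the same series could be decomposed for root causes or extrapolated forward, but that goes beyond description.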


Strategies for creating seamless and personalised customer journeys

One cannot overlook the fact that CX in financial services is crucial since it influences customer satisfaction levels. In turn, this depends on providing seamless experiences across all touchpoints, beginning with the initial contact, right up to the final delivery and the consumption of financial products or services. Significantly, CX in financial services goes beyond merely offering excellent customer service. Instead, it involves ensuring frictionless, personalised experiences that surpass expectations across every stage of the customer’s journey. Be it banking, insurance or investment services, the ongoing digital transformation has substantially enhanced the expectations of customers. Today, speed, convenience and personalisation are taken for granted. As a result, financial entities that provide superior CX enjoy a clear competitive advantage in both attracting and retaining their customers, leading to increased profitability. ... Navigating complex regulatory environments is equally essential to financial integrity, as regulations uphold standards that protect consumers. Compliant firms earn trust and enhance their reputation, which complements the focus on customer experience as a source of competitiveness in the market.


Cultivating Continuous Learning: Transforming L&D for the Hybrid Future and Beyond

Adapting L&D for the Digital Age requires strategic initiatives that cater to evolving demands in today's dynamic landscape. This entails embracing technology-driven learning platforms and tools that facilitate remote access, personalised learning paths, and real-time analytics. By leveraging these innovations, organisations can ensure that learning experiences are agile, responsive, and tailored to individual needs. Integrating digital collaboration tools fosters a culture of continuous learning and knowledge sharing across geographies, enhancing organisational agility and competitiveness in a digital-first world. Future-proofing L&D involves identifying and developing critical skills that will drive success in tomorrow's workplace. This proactive approach requires foresight into emerging trends and industry shifts, preparing employees with skills such as adaptability, digital literacy, creativity, and emotional intelligence. Implementing forward-thinking training programs and certifications ensures that employees remain adept and resilient in the face of technological advancements and market disruptions.


Cybersecurity Can Be a Business Enabler

Cybersecurity controls and protective mechanisms can protect an organization's assets - its data, people, technology equipment, etc. By actively protecting assets and preventing data breaches, an organization can avoid potential negative business impact, financial or otherwise. And because the organization does not have to worry about that potential damage, it can operate in a safe and focused fashion. ... Suffering a negative security event indicates some gap or deficiency in an organization's security posture. All organizations have security gaps, but some never report a negative security event - probably because they have invested more resources in security, differentiating themselves from their competitors. Implementing strong protective measures shows customers that an organization takes security seriously, which makes it a more appealing business partner. ... Customers and partners are becoming more aware of cyber risks, and they prioritize cybersecurity when they consider engaging in business. By implementing effective cybersecurity measures, a company can improve the confidence that potential customers and partners have in it. Over time, this will lead to increased loyalty and trust.


What is transformational leadership? A model for motivating innovation

The most important thing you can do as a transformational leader is to lead by example. Employees will look to you as a role model for behavior in all areas of the workplace. If you lead with authenticity, employees will pick up on that behavior and feel inspired to maintain that high standard for performance. It’s not about manipulating employees into working hard; it’s about leading by example and positively influencing others through a commitment to trust, transparency, and respect. ... To help create change, it’s important to challenge long-standing beliefs in the company and push the status quo by encouraging innovation, creativity, critical thinking, and problem-solving. Transformational leaders should help employees feel comfortable exploring new ideas and opportunities that can inject innovation into the organization. ... As a transformational leader, you will need to encourage your team to feel attached and committed to the vision of the organization. You want to ensure employees feel as committed to these goals as you do as a leader by giving employees a strong sense of purpose, rather than attempting to motivate them through fear.


How Post-Quantum Cryptography Can Ensure Resilience

Quantum computing represents a major threat to data security, as it can make attacks against cryptography much more efficient. There are two ways bad actors could use this technology. One is the “Store now, decrypt later” method, in which cybercriminals steal sensitive data and wait until quantum computers have the ability to break its encryption. This is particularly important for you to know if your organization retains data with a long confidentiality span. The other method is to break the data’s digital signatures. A bad actor could “compute” credentials based on publicly available information, then impersonate someone with the authority to sign documents or approve requests. As with the first method, criminals can do this retroactively if older signatures are not updated. Today’s encryption methods cannot stand against the capabilities of tomorrow’s quantum computers. When large-scale quantum computers are built, they will have the computing ability to decrypt many of the current public key cryptography systems. 
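To make the quantum threat concrete, the sketch below tabulates the commonly cited effective security levels of a few primitives under classical versus quantum attack: Shor's algorithm breaks RSA and elliptic-curve cryptography outright, while Grover's search roughly halves the effective key length of symmetric ciphers. The specific figures and the 128-bit threshold are illustrative assumptions drawn from standard security-strength estimates, not from the article itself.

```python
# Illustrative sketch, not a cryptanalysis tool. Security-bit figures are
# rough, widely cited estimates: Shor's algorithm reduces RSA/ECC to zero
# effective security on a large quantum computer; Grover's search roughly
# halves symmetric key strength.

PRIMITIVES = {
    # name: (classical security bits, quantum security bits)
    "RSA-2048":  (112, 0),    # broken by Shor's algorithm (integer factoring)
    "ECC P-256": (128, 0),    # broken by Shor's algorithm (discrete log)
    "AES-128":   (128, 64),   # Grover halves the effective key length
    "AES-256":   (256, 128),  # generally considered quantum-resistant
}

def quantum_safe(name, threshold=128):
    """Return True if the primitive retains at least `threshold` bits of
    security against a quantum adversary (threshold is an assumption)."""
    _, quantum_bits = PRIMITIVES[name]
    return quantum_bits >= threshold

for name, (classical, quantum) in PRIMITIVES.items():
    status = "quantum-resistant" if quantum_safe(name) else "needs migration"
    print(f"{name:9s} classical={classical:3d} bits, "
          f"quantum={quantum:3d} bits -> {status}")
```

A table like this is why "store now, decrypt later" matters: data encrypted today under RSA-2048 or P-256 key exchange offers no long-term confidentiality once a sufficiently large quantum computer exists, whereas AES-256 with post-quantum key establishment remains defensible.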



Quote for the day:

"The only limit to our realization of tomorrow will be our doubts of today." -- Frank D Roosevelt