
Daily Tech Digest - August 13, 2025


Quote for the day:

“You don’t lead by pointing and telling people some place to go. You lead by going to that place and making a case.” -- Ken Kesey


9 things CISOs need to know about the dark web

There’s a growing emphasis on scalability and professionalization, with aggressive promotion and recruitment for ransomware-as-a-service (RaaS) operations. This includes lucrative affiliate programs to attract technically skilled partners and tiered access enabling affiliates to pay for premium tools, zero-day exploits or access to pre-compromised networks. The dark web is fragmenting into specialized communities that include credential marketplaces, exploit exchanges for zero-days, malware kits, access to compromised systems, and forums for fraud tools. Initial access brokers (IABs) are thriving, selling entry points into corporate environments, which are then monetized by ransomware affiliates or data extortion groups. Ransomware leak sites showcase attackers’ successes, publishing sample files, threats of full data dumps, and the names and stolen data of victim organizations that refuse to pay. ... While DDoS-for-hire services have existed for years, their scale and popularity are growing. “Many offer free trial tiers, with some offering full-scale attacks with no daily limits, dozens of attack types, and even significant 1 Tbps-level output for a few thousand dollars,” Richard Hummel, cybersecurity researcher and threat intelligence director at Netscout, says. The operations are becoming more professional, and many platforms mimic legitimate e-commerce sites, displaying user reviews, seller ratings, and dispute resolution systems to build trust among illicit actors.


CMMC Compliance: Far More Than Just an IT Issue

For many years, companies working with the US Department of Defense (DoD) treated regulatory mandates including the Cybersecurity Maturity Model Certification (CMMC) as a matter best left to the IT department. The prevailing belief was that installing the right software and patching vulnerabilities would suffice. Yet, reality tells a different story. Increasingly, audits and assessments reveal that when compliance is seen narrowly as an IT responsibility, significant gaps emerge. In today’s business environment, managing controlled unclassified information (CUI) and federal contract information (FCI) is a shared responsibility across various departments – from human resources and manufacturing to legal and finance. ... For CMMC compliance, there needs to be continuous assurance involving regularly monitoring systems, testing controls and adapting security protocols whenever necessary. ... Businesses are having to rethink much of their approach to security because of CMMC requirements. Rather than treating it as something to be handed off to the IT department, organizations must now commit to a comprehensive, company-wide strategy. Integrating thorough physical security, ongoing training, updated internal policies and steps for continuous assurance means companies can build a resilient framework that meets today’s regulatory demands and prepares them to rise to challenges on the horizon.


Beyond Burnout: Three Ways to Reduce Frustration in the SOC

For years, we’ve heard how cybersecurity leaders need to get “business smart” and better understand business operations. That is mostly happening, but it’s backwards. What we need is for business leaders to learn cybersecurity, and even further, recognize it as essential to their survival. Security cannot be viewed as some cost center tucked away in a corner; it’s the backbone of your entire operation. It’s also part of an organization’s cyber insurance – the internal insurance. Simply put, cybersecurity is the business, and you absolutely cannot sell without it. ... SOCs face a deluge of alerts, threats, and data that no human team can feasibly process without burning out. While many security professionals remain wary of artificial intelligence, thoughtfully embracing AI offers a path toward sustainable security operations. This isn’t about replacing analysts with technology. It’s about empowering them to do the job they actually signed up for. AI can dramatically reduce toil by automating repetitive tasks, provide rapid insights from vast amounts of data, and help educate junior staff. Instead of spending hours manually reviewing documents, analysts can leverage AI to extract key insights in minutes, allowing them to apply their expertise where it matters most. This shift from mundane processing to meaningful analysis can dramatically improve job satisfaction.


7 legal considerations for mitigating risk in AI implementation

AI systems often rely on large volumes of data, including sensitive personal, financial and business information. Compliance with data privacy laws is critical, as regulations such as the European Union’s General Data Protection Regulation, the California Consumer Privacy Act and other emerging state laws impose strict requirements on the collection, processing, storage and sharing of personal data. ... AI systems can inadvertently perpetuate or amplify biases present in training data, leading to unfair or discriminatory outcomes. This risk is present in any sector, from hiring and promotions to customer engagement and product recommendations. ... The legal framework surrounding AI is evolving rapidly. In the U.S., multiple federal agencies, including the Federal Trade Commission and Equal Employment Opportunity Commission, have signaled they will apply existing laws to AI use cases. AI-specific state laws, including in California and Utah, have taken effect in the last year. ... AI projects involve unique intellectual property questions related to data ownership and IP rights in AI-generated works. ... AI systems can introduce new cybersecurity vulnerabilities, including risks related to data integrity, model manipulation and adversarial attacks. Organizations must prioritize cybersecurity to protect AI assets and maintain trust.


Forrester’s Keys To Taming ‘Jekyll and Hyde’ Disruptive Tech

“Disruptive technologies are a double-edged sword for environmental sustainability, offering both crucial enablers and significant challenges,” explained the 15-page report written by Abhijit Sunil, Paul Miller, Craig Le Clair, Renee Taylor-Huot, Michele Pelino, with Amy DeMartine, Danielle Chittem, and Peter Harrison. “On the positive side,” it continued, “technology innovations accelerate energy and resource efficiency, aid in climate adaptation and risk mitigation, monitor crucial sustainability metrics, and even help in environmental conservation.” “However,” it added, “the necessary compute power, volume of waste, types of materials needed, and scale of implementing these technologies can offset their benefits.” ... “To meet sustainability goals with automation and AI,” he told TechNewsWorld, “one of our recommendations is to develop proofs of concept for ‘stewardship agents’ and explore emerging robotics focused on sustainability.” When planning AI operations, Franklin Manchester, a principal global industry advisor at SAS, an analytics and artificial intelligence software company in Cary, N.C., cautioned, “Not every nut needs to be cracked with a sledgehammer.” “Start with good processes — think lean process mapping, for example — and deploy AI where it makes sense to do so,” he told TechNewsWorld.


5 Key Benefits of Data Governance

Data governance processes establish data ethics, a code of behavior providing a trustworthy business climate and compliance with regulatory requirements. The IAPP calculates that 79% of the world’s population is now protected under privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). This statistic highlights the importance of governance frameworks for risk management and customer trust. ... Data governance frameworks define data governance roles and responsibilities and streamline processes so that corporate-wide communications can improve. This systematic approach sets up businesses to be more agile, increasing the “freedom to innovate, invest, or hunker down and focus internally,” says O’Neal. For example, Freddie Mac developed a solid data strategy that streamlined data governance communications and later had the buy-in needed for the next iteration. ... With a complete picture of business activities, challenges, and opportunities, data governance creates the flexibility to respond quickly to changing needs. This allows for better self-service business intelligence, where business users can gather multi-structured data from various sources and convert it into actionable intelligence.


Architecture Lessons from Two Digital Transformations

The prevailing mindset was “Don’t touch what isn’t broken”. This approach, though seemingly practical, reflected a deeper inertia, rooted in a cash-strapped culture and leadership priorities that often leaned towards prestige over progress. Over the years, the organization had acquired others in an attempt to grow its customer base. These mergers and acquisitions led to the inheritance of a great deal more legacy estate. The mess burgeoned to the point that they needed a transformation, not now, but yesterday! That is exactly where the Enterprise Architecture practice comes into the picture. Strategically, a greenfield approach was suggested: a brand-new system from scratch, with modern data centers for the infrastructure, cloud platforms for the applications, plug-and-play architecture (or composable architecture, as it is better known) for the technology, unified yet diversified multi-branding under one umbrella, and the whole works. Where things slowly started taking a downhill turn is when they decided to “outsource” the entire development of this new and shiny platform to a vendor. The reasoning was that the organization did not want to diversify from being a banking institution and turn into an IT-heavy organization. They sought experienced engineering teams who could hit the ground running and deliver in two years flat.


Cloud security in multi-tenant environments

The most useful security strategy in a multi-tenant cloud environment comes from cultivating a security-first culture. It is important to educate the team on the intricacies of the cloud security system and to implement stringent password and authentication policies, thereby promoting secure development practices. Security teams and company executives may reduce the possible effects of breaches and remain ready for changing threats with the support of event simulations, tabletop exercises, and regular training. ... As we navigate the evolving landscape of enterprise cloud computing, multi-tenant environments will undoubtedly remain a cornerstone of modern IT infrastructure. However, the path forward demands more than just technological adaptation – it requires a fundamental shift in how we approach security in shared spaces. Organizations must embrace a comprehensive defense-in-depth strategy that transcends traditional boundaries, encompassing everything from robust infrastructure hardening to sophisticated application security and meticulous user governance. The future of cloud computing need not present a binary choice between efficiency and security. ... By placing security at the heart of multi-tenant operations, organizations can fully harness the transformative power of cloud technology while protecting their most critical assets.


This Big Data Lesson Applies to AI

Bill Schmarzo was one of the most vocal supporters of the idea that there were no silver bullets, and that successful business transformation was the result of careful planning and a lot of hard work. A decade ago, the “Dean of Big Data” let this publication in on the secret recipe he would use to guide his clients. He called it the SAM test, and it allowed business leaders to gauge the viability of new IT projects through three lenses. First, is the new project strategic? That is, will it make a big difference for the company? If it won’t, why are you investing lots of money? Second, is the proposed project actionable? You might be able to get some insight with the new tech, but can your business actually do anything with it? Third, is the project material? The new project might technically be feasible, but if the costs outweigh the benefits, then it’s a failure. Schmarzo, who is currently working as Dell’s Customer AI and Data Innovation Strategist, was also a big proponent of the importance of data governance and data management. The same data governance and data management bugaboos that doomed so many big data projects are, not surprisingly, raising their ugly little heads in the age of AI. Which brings us to the current AI hype wave. We’re told that trillions of dollars are on the line with large language models, that we’re on the cusp of a technological transformation the likes of which we have never seen.
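The three lenses lend themselves to a simple screening gate. Here is a minimal sketch in Python; the field names and wording are illustrative paraphrases of the description above, not Schmarzo's own:

```python
def sam_test(project: dict) -> tuple[bool, str]:
    """Screen a proposed IT project through the three SAM lenses."""
    lenses = [
        ("strategic", "will it make a big difference for the company?"),
        ("actionable", "can the business actually act on the insight?"),
        ("material", "do the benefits outweigh the costs?"),
    ]
    # All three lenses must pass; the first failure ends the review.
    for lens, question in lenses:
        if not project.get(lens, False):
            return False, f"fails the '{lens}' lens: {question}"
    return True, "passes all three lenses"

# A project that is feasible but too costly fails the 'material' lens.
verdict, reason = sam_test({"strategic": True, "actionable": True, "material": False})
```

The point of expressing it this way is that the test is conjunctive: a project that shines on strategy but cannot be acted on, or whose costs exceed its benefits, still fails.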


Sovereign cloud and digital public infrastructure: Building India’s AI backbone

India’s Digital Public Infrastructure (DPI) is an open, interoperable platform that powers essential services like identity and payments. It comprises foundational systems that are accessible, secure, and support seamless integration. In practice, this has taken shape as the famous “India Stack.” ... India’s digital economy is on an exciting trajectory. A large slice of that will be AI-driven services like smart agriculture, precision health, financial inclusion, and more. But to fully capitalize on this opportunity, we need both rich data and trusted compute. DPI provides vast amounts of structured data (financial records, IDs, health info) and access channels. Combining that with a sovereign cloud means we can turn data into insight on Indian soil. Indian regulators now view data itself as a strategic asset and fuel for AI. AI pilots (e.g., local-language advisory bots) are already being built on top of DPI platforms (UPI, ONDC, etc.) to deliver inclusive services. And the government has even subsidized thousands of GPUs for researchers. But all this computing and data must be hosted securely. If our AI models and sensitive datasets live on foreign soil, we remain vulnerable to geopolitical shifts and export controls. ... Now, policy is catching up with sovereignty. In 2023, the new Digital Personal Data Protection (DPDP) Act formally mandated local storage for sensitive personal data. 

Daily Tech Digest - June 24, 2025


Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal


Why Agentic AI Is a Developer's New Ally, Not Adversary

Because agentic AI can complete complex workflows rather than simply generating content, it opens the door to a variety of AI-assisted use cases in software development that extend far beyond writing code — which, to date, has been the main way that software developers have leveraged AI. ... But agentic AI eliminates the need to spell out instructions or carry out manual actions entirely. With just a sentence or two, developers can prompt AI to perform complex, multi-step tasks. It's important to note that, for the most part, agentic AI use cases like those described above remain theoretical. Agentic AI remains a fairly new and quickly evolving field. The technology to do the sorts of things mentioned here theoretically exists, but existing tool sets for enabling specific agentic AI use cases are limited. ... It's also important to note that agentic AI poses new challenges for software developers. One is the risk that AI will make the wrong decisions. Like any LLM-based technology, AI agents can hallucinate, causing them to perform in undesirable ways. For this reason, it's tough to imagine entrusting high-stakes tasks to AI agents without requiring a human to supervise and validate them. Agentic AI also poses security risks. If agentic AI systems are compromised by threat actors, any tools or data that AI agents can access (such as source code) could also be exposed.


Modernizing Identity Security Beyond MFA

The next phase of identity security must focus on phishing-resistant authentication, seamless access, and decentralized identity management. The key principle guiding this transformation is phishing resistance by design. The adoption of FIDO2 and WebAuthn standards enables passwordless authentication using cryptographic key pairs. Because the private key never leaves the user’s device, attackers cannot intercept it. These methods eliminate the weakest link — human error — by ensuring that authentication remains secure even if users unknowingly interact with malicious links or phishing campaigns. ... By leveraging blockchain-based verified credentials — digitally signed, tamper-evident credentials issued by a trusted entity — wallets enable users to securely authenticate to multiple resources without exposing their personal data to third parties. These credentials can include identity proofs, such as government-issued IDs, employment verification, or certifications, which enable strong authentication. Using them for authentication reduces the risk of identity theft while improving privacy. Modern authentication must allow users to register once and reuse their credentials seamlessly across services. This concept reduces redundant onboarding processes and minimizes the need for multiple authentication methods.
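The "private key never leaves the device" property rests on an ordinary challenge-response signature. The toy sketch below illustrates the idea with deliberately tiny textbook-RSA numbers; real FIDO2/WebAuthn uses standard elliptic curves and CBOR-encoded credential formats, none of which is shown here:

```python
import hashlib
import secrets

# Toy textbook-RSA key pair (insecure, illustration only). In WebAuthn,
# the authenticator holds the private exponent d; the server stores
# only the public key (n, e) registered at enrollment.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign(challenge: bytes) -> int:
    """Device side: sign the server's challenge with the private key d."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature using only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = secrets.token_bytes(32)  # fresh per login, so a phished reply is useless later
assert verify(challenge, sign(challenge))
```

Because the challenge is random per login and only the signature crosses the network, a phishing site that captures the exchange learns nothing reusable, which is the property the paragraph above describes.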


The Pros and Cons of Becoming a Government CIO

Seeking a job as a government CIO offers a chance to make a real impact on the lives of citizens, says Aparna Achanta, security architect and leader at IBM Consulting -- Federal. CIOs typically lead a wide range of projects, such as upgrading systems in education, public safety, healthcare, and other areas that provide critical public services. "They [government CIOs] work on large-scale projects that benefit communities beyond profits, which can be very rewarding and impactful," Achanta observed in an online interview. "The job also gives you an opportunity for leadership growth and the chance to work with a wide range of departments and people." ... "Being a government CIO might mean dealing with slow processes and bureaucracy," Achanta says. "Most of the time, decisions take longer because they have to go through several layers of approval, which can delay projects." Government CIOs face unique challenges, including budget constraints, a constantly evolving mission, and increased scrutiny from government leaders and the public. "Public servants must be adept at change management in order to be able to pivot and implement the priorities of their administration to the best of their ability," Tamburrino says. Government CIOs are often frustrated by a hierarchy that runs at a far slower pace than their enterprise counterparts.


Why work-life balance in cybersecurity must start with executive support

Watching your mental and physical health is critical. Setting boundaries is something that helps the entire team, not just as a cyber leader. One rule we have in my team is that we do not use work chat after business hours unless there are critical events. Everyone needs a break and sometimes hearing a text or chat notification can create undue stress. Another critical aspect of being a cybersecurity professional is to hold to your integrity. People often do not like the fact that we have to monitor, report, and investigate systems and human behavior. When we get pushback for this with unprofessional behavior or defensiveness, it can often cause great personal stress. ... Executive leadership plays one of the most critical roles in supporting the CISO. Without executive level support, we would be crushed by the demands and the frequent conflicts of interest we experience. For example, project managers, CIOs, and other IT leadership roles might prioritize budget, cost, timelines, or other needs above security. A security professional prioritizes people (safety) and security above cost or timelines. The nature of our roles requires executive leadership support to balance the security and privacy risk (and what is acceptable to an executive). I think in several instances the executive board and CEOs understand this, but we are still a growing profession and there needs to be more education in this area.


Building Trust in Synthetic Media Through Responsible AI Governance

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy. This creates a paradox: inaccurate labels may legitimize harmful media, while unlabeled content may appear trustworthy. Moreover, users may not view basic AI edits, such as color correction, as manipulation, while opinions differ on changes like facial adjustments or filters. It remains unclear whether simple color changes require a label, or if labeling should only occur when media is substantively altered or generated using AI. Similarly, many synthetic media artifacts may not fit the standard definition of pornography, such as images showing white substances on a person’s face; however, they can often be humiliating. ... Second, synthetic media use cases exist on a spectrum, and the presence of mixed AI- and human-generated content adds complexity and uncertainty in moderation strategies. For example, when moderating human-generated media, social media platforms only need to identify and remove harmful material. In the case of synthetic media, it is often necessary to first determine whether the content is AI-generated and then assess its potential harm. This added complexity may lead platforms to adopt overly cautious approaches to avoid liability. These challenges can undermine the effectiveness of labeling.


How future-ready leadership can power business value

Leadership in 2025 requires more than expertise; it demands adaptability, compassion, and tech fluency. “Leadership today isn’t about having all the answers; it’s about creating an environment where teams can sense, interpret, and act with speed, autonomy, and purpose,” said Govind. As the learning journey of Conduent pivots from stabilization to growth, he shared that leaders need to do two key things in the current scenario: be human-centric and be digitally fluent. Similarly, Srilatha highlighted a fundamental shift happening among leaders: “Leaders today must lead with both compassion and courage while taking tough decisions with kindness.” She also underlined the rising importance of the three Rs in modern leadership: reskilling, resilience, and rethinking. ... Govind pointed to something deceptively simple: acting on feedback. “We didn’t just collect feedback, we analyzed sentiment, made changes, and closed the loop. That made stakeholders feel heard.” This approach led Conduent to experiment with program duration, where they went from 12 to 8 to 6 months. “Learning is a continuum, not a one-off event,” Govind added. ... Leadership development is no longer optional or one-size-fits-all. It’s a business imperative—designed around human needs and powered by digital fluency.


The CISO’s 5-step guide to securing AI operations

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolve. CISOs should anticipate other state, federal, regional, and industry regulations. ... Established secure software development lifecycles should be amended to cover things such as AI threat modeling, data handling, API security, etc. ... End user training should include acceptable use, data handling, misinformation, and deepfake training. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles. ... Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly about how AI will be used for existing technology tuning and optimization.


State of Open Source Report Reveals Low Confidence in Big Data Management

"Many organizations know what data they are looking for and how they want to process it but lack the in-house expertise to manage the platform itself," said Matthew Weier O'Phinney, Principal Product Manager at Perforce OpenLogic. "This leads to some moving to commercial Big Data solutions, but those that can't afford that option may be forced to rely on less-experienced engineers. In which case, issues with data privacy, inability to scale, and cost overruns could materialize." ... EOL operating system, CentOS Linux, showed surprisingly high usage, with 40% of large enterprises still using it in production. While CentOS usage declined in Europe and North America in the past year, it is still the third most used Linux distribution overall (behind Ubuntu and Debian), and the top distribution in Asia. For teams deploying EOL CentOS, 83% cited security and compliance as their biggest concern around their deployments. ... "Open source is the engine driving innovation in Big Data, AI, and beyond—but adoption alone isn't enough," said Gael Blondelle, Chief Membership Officer of the Eclipse Foundation. "To unlock its full potential, organizations need to invest in their people, establish the right processes, and actively contribute to the long-term sustainability and growth of the technologies they depend on."


Cybercrime goes corporate: A trillion-dollar industry undermining global security

The CaaS market is a booming economy in the shadows, driving annual revenues into billions. While precise figures are elusive due to its illicit nature, reports suggest it's a substantial and growing market. CaaS contributes significantly, and the broader cybersecurity services market is projected to reach hundreds of billions of dollars in the coming years. If measured as a country, cybercrime would already be the world's third-largest economy, with projected annual damages reaching USD 10.5 trillion by 2025, according to Cybersecurity Ventures. This growth is fueled by the same principles that drive legitimate businesses: specialisation, efficiency, and accessibility. CaaS platforms function much like dark online marketplaces. They offer pre-made hacking kits, phishing templates, and even access to already compromised computer networks. These services significantly lower the entry barrier for aspiring criminals. ... Enterprises must recognise that attackers often hit multiple systems simultaneously—computers, user identities, and cloud environments. This creates significant "noise" if security tools operate in isolation. Relying on many disparate security products makes it difficult to gain a holistic view and understand that seemingly separate incidents are often part of a single, coordinated attack.


Modern apps broke observability. Here’s how we fix it.

For developers, figuring out where things went wrong is difficult. In a survey looking at the biggest challenges to observability, 58% of developers said that identifying blind spots is a top concern. Stack traces may help, but they rarely provide enough context to diagnose issues quickly; developers chase down screenshots, reproduce problems, and piece together clues manually using the metric and log data from APM tools; a bug that could take 30 minutes to fix ends up consuming days or weeks. Meanwhile, telemetry data accumulates in massive volumes—expensive to store and hard to interpret. Without tools to turn data into insight, you’re left with three problems: high bills, burnout, and time wasted fixing bugs that neither affect core business functions nor drive revenue, even as increasing developer efficiency is a top strategic goal at many organizations. ... More than anything, we need a cultural change. Observability must be built into products from the start. That means thinking early about how we’ll track adoption, usage, and outcomes—not just deliver features. Too often, teams ship functionality only to find no one is using it. Observability should show whether users ever saw the feature, where they dropped off, or what got in the way. That kind of visibility doesn’t come from backend logs alone.
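Product-level observability of the kind described, whether users saw a feature and where they dropped off, can start as simply as counting funnel steps. A minimal sketch, with invented step names:

```python
from collections import Counter

class FeatureFunnel:
    """Count how many users reach each step of a feature funnel,
    then report the drop-off between consecutive steps."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.counts = Counter()

    def record(self, user_steps):
        """Record the steps one user reached in a session."""
        for step in user_steps:
            if step in self.steps:
                self.counts[step] += 1

    def dropoff(self):
        """Fraction of users lost between each pair of consecutive steps."""
        report = {}
        for prev, nxt in zip(self.steps, self.steps[1:]):
            seen, kept = self.counts[prev], self.counts[nxt]
            report[f"{prev} -> {nxt}"] = 1 - kept / seen if seen else 0.0
        return report

funnel = FeatureFunnel(["saw_feature", "clicked", "completed"])
funnel.record(["saw_feature", "clicked", "completed"])
funnel.record(["saw_feature", "clicked"])
funnel.record(["saw_feature"])
```

With the three sessions above, half the users who clicked never completed, which is exactly the kind of signal backend logs alone would miss.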

Daily Tech Digest - April 02, 2025


Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward


The smart way to tackle data storage challenges

Data intelligence makes data stored on the X10000 ready for AI applications to use as soon as they are ingested. The company has a demo of this, where the X10000 ingests customer support documents and enables users to instantly ask it relevant natural language questions via a locally hosted version of the DeepSeek LLM. This kind of application wouldn’t be possible with low-speed legacy object storage, says the company. The X10000’s all-NVMe storage architecture helps to support low-latency access to this indexed and vectorized data, avoiding front-end caching bottlenecks. Advances like these provide up to 6x faster performance than the X10000’s leading object storage competitors, according to HPE’s benchmark testing. ... The containerized architecture opens up options for inline and out-of-band software services, such as automated provisioning and life cycle management of storage resources. It is also easier to localize a workload’s data and compute resources, minimizing data movement by enabling workloads to process data in place rather than moving it to other compute nodes. This is an important performance factor in low-latency applications like AI training and inference. Another aspect of container-based workloads is that all workloads can interact with the same object storage layer. 
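The "ingest, index, query immediately" pattern the article describes can be illustrated with a toy retrieval loop. This bag-of-words sketch stands in for the learned vector embeddings a real LLM-backed system would use; the document names and contents below are invented and have nothing to do with HPE's product:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Vectorize documents at ingest time so they are queryable immediately.
index = {name: embed(text) for name, text in {
    "reset": "how to reset your password",
    "billing": "update billing and payment details",
}.items()}

def ask(question: str) -> str:
    """Return the name of the indexed document most similar to the question."""
    q = embed(question)
    return max(index, key=lambda name: cosine(q, index[name]))
```

The design point is that vectorization happens once, at ingest, so every later natural-language query is a cheap similarity lookup rather than a scan of raw objects.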


Talent gap complicates cost-conscious cloud planning

The top strategy so far is what one enterprise calls the “Cloud Team.” You assemble all your people with cloud skills, and your own best software architect, and have the team examine current and proposed cloud applications, looking for a high-level approach that meets business goals. In this process, the team tries to avoid implementation specifics, focusing instead on the notion that a hybrid application has an agile cloud side and a governance-and-sovereignty data center side, and what has to be done is push functionality into the right place. ... To enterprises who tried the Cloud Team, there’s also a deeper lesson. In fact, there are two. Remember the old “the cloud changes everything” claim? Well, it does, but not the way we thought, or at least not as simply and directly as we thought. The economic revolution of the cloud is selective, a set of benefits that has to be carefully fit to business problems in order to deliver the promised gains. Application development overall has to change, to emphasize a strategic-then-tactical flow that top-down design always called for but didn’t always deliver. That’s the first lesson. The second is that the kinds of applications that the cloud changes the most are applications we can’t move there, because they never got implemented anywhere else.


Your smart home may not be as secure as you think

Most smart devices rely on Wi-Fi to communicate. If these devices connect to an unsecured or poorly protected Wi-Fi network, they can become an easy target. Unencrypted networks are especially vulnerable, and hackers can intercept sensitive data, such as passwords or personal information, being transmitted from the devices. ... Many smart devices collect personal data—sometimes more than users realize. Some devices, like voice assistants or security cameras, are constantly listening or recording, which can lead to privacy violations if not properly secured. In some cases, manufacturers don’t encrypt or secure the data they collect, making it easier for malicious actors to exploit it. ... Smart home devices often connect to third-party platforms or other devices. These integrations can create security holes if the third-party services don’t have strong protections in place. A breach in one service could give attackers access to an entire smart home ecosystem. To mitigate this risk, it’s important to review the security practices of any third-party service before integrating it with your IoT devices. ... If your devices support it, always enable 2FA and link your accounts to a reliable authentication app or your mobile number. You can use 2FA with smart home hubs and cloud-based apps that control IoT devices.


Beyond compensation—crafting an experience that retains talent

Looking ahead, the companies that succeed in attracting and retaining top talent will be those that embrace innovation in their Total Rewards strategies. AI-driven personalization is already changing the game—organizations are using AI-powered platforms to tailor benefits to individual employee needs, offering a menu of options such as additional PTO, learning stipends, or wellness perks. Similarly, equity-based compensation models are evolving, with some businesses exploring cryptocurrency-based rewards and fractional ownership opportunities. Sustainability is also becoming a key factor in Total Rewards. Companies that incorporate sustainability-linked incentives, such as carbon footprint reduction rewards or volunteer days, are seeing higher engagement and satisfaction levels. ... Total Rewards is no longer just about compensation—it’s about creating an ecosystem that supports employees in every aspect of their work and life. Companies that adopt the VALUE framework—Variable pay, Aligned well-being benefits, Learning and growth opportunities, Ultimate flexibility, and Engagement-driven recognition—will not only attract top talent but also foster long-term loyalty and satisfaction.


Bridging the Gap Between the CISO & the Board of Directors

Many executives, including board members, may not fully understand the CISO's role. This isn't just a communications gap; it's also an opportunity to build relationships across departments. When CISOs connect security priorities to broader business goals, they show how cybersecurity is a business enabler rather than just an operational cost. ... Often, those in technical roles lack the ability to speak anything other than the language of tech, making it harder to communicate with board members who don't hold tech or cybersecurity expertise. I remember presenting to our board early into my CISO role and, once I was done, seeing some blank stares. The issue wasn't that they didn't care about what I was saying; we just weren't speaking the same language. ... There are many areas in which communication between a board and CISO is important — but there may be none more important than compliance. Data breaches today are not just technical failures. They carry significant legal, financial, and reputational consequences. In this environment, regulatory compliance isn't just a box to check; it's a critical business risk that CISOs must manage, particularly as boards become more aware of the business impact of control failures in cybersecurity.


What does a comprehensive backup strategy look like?

Though backups are rarely needed, they form the foundation of disaster recovery. Milovan follows the classic 3-2-1 rule: three data copies, on two different media types, with one off-site copy. He insists on maintaining multiple copies “just in case.” In addition, NAS users need to update their OS regularly, Synology’s Alexandra Bejan says. “Outdated operating systems are particularly vulnerable there.” Bejan emphasizes the positives from implementing the textbook best practices Ichthus employs. ... One may imagine that smaller enterprises make for easier targets due to their limited IT. However, nothing could be further from the truth. Bejan: “We have observed that the larger the enterprise, the more difficult it is to implement a comprehensive data protection strategy.” She says the primary reason for this lies in the previously fragmented investments in backup infrastructure, where different solutions were procured for various workloads. “These legacy solutions struggle to effectively manage the rapidly growing number of workloads and the increasing data size. At the same time, they require significant human resources for training, with steep learning curves, making self-learning difficult. When personnel are reassigned, considerable time is needed to relearn the system.”
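A practical complement to the 3-2-1 rule is periodically verifying that every copy still matches the original. A minimal sketch (hypothetical paths and helper names) using SHA-256 checksums:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(original: Path, copies: list[Path]) -> dict[str, bool]:
    """Compare each backup copy against the original's checksum."""
    want = file_digest(original)
    return {str(c): file_digest(c) == want for c in copies}
```

Run against the primary data plus both backup media (with the off-site copy reached through a mounted path or sync agent), any `False` entry flags a copy that has silently drifted or been corrupted before a disaster forces you to find out the hard way.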


Malicious actors increasingly put privileged identity access to work across attack chains

Many of these credentials are extracted from computers using so-called infostealer malware, malicious programs that scour the operating system and installed applications for saved usernames and passwords, browser session tokens, SSH and VPN certificates, API keys, and more. The advantage of using stolen credentials for initial access is that they require less skill compared to exploiting vulnerabilities in publicly facing applications or tricking users into installing malware from email links or attachments — although these initial access methods remain popular as well. ... “Skilled actors have created tooling that is freely available on the open web, easy to deploy, and designed to specifically target cloud environments,” the Talos researchers found. “Some examples include ROADtools and AAAInternals, publicly available frameworks designed to enumerate Microsoft Entra ID environments. These tools can collect data on users, groups, applications, service principals, and devices, and execute commands.” These are often coupled with techniques designed to exploit the lack of MFA or incorrectly configured MFA. For example, push spray attacks, also known as MFA bombing or MFA fatigue, rely on bombing the user with MFA push notifications on their phones until they get annoyed and approve the login thinking it’s probably the system malfunctioning.
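MFA-fatigue attacks leave a distinctive signature in authentication logs: a burst of push notifications to one user in a short window. A toy detection sketch (hypothetical event format, not any vendor's log schema) that flags users receiving an abnormal number of pushes:

```python
from collections import defaultdict

def flag_push_bombing(events, window_s=300, threshold=5):
    """events: iterable of (timestamp_s, user) MFA-push records, sorted by time.
    Returns users who got more than `threshold` pushes inside any `window_s` span."""
    recent = defaultdict(list)   # user -> timestamps of pushes still in the window
    flagged = set()
    for ts, user in events:
        q = recent[user]
        q.append(ts)
        # drop pushes that fell out of the sliding window
        while q and ts - q[0] > window_s:
            q.pop(0)
        if len(q) > threshold:
            flagged.add(user)
    return flagged
```

In practice, flagged accounts would trigger an automatic lockout or a switch to number-matching prompts, which defeat the "approve until it stops" reflex the attack relies on.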


Role of Blockchain in Enhancing Cybersecurity

At its core, a blockchain is a distributed ledger in which each data block is cryptographically connected to its predecessor, forming an unbreakable chain. Without network authorization, modifying or removing data from a blockchain becomes exceedingly difficult. This ensures that conventional data records stay consistent and accurate over time. The architectural structure of blockchain plays a critical role in protecting data integrity. Every single transaction is time-stamped and merged into a block, which is then confirmed and sealed through consensus. This process provides an undeniable record of all activities, simplifying audits and boosting confidence in system reliability. Similarly, blockchain ensures that every financial transaction is correctly documented and easily accessible. This innovation helps prevent record manipulation, double-spending, and other forms of fraud. By combining cryptographic safeguards with a decentralized architecture, it offers an ideal solution to information security. It also significantly reduces risks related to data breaches, hacking, and unauthorized access in the digital realm. Furthermore, blockchain strengthens cybersecurity by addressing concerns about unauthorized access and the rising threat of cyberattacks. 
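The hash-linking described above can be sketched in a few lines; this toy chain (illustrative only, with no consensus layer) shows how editing any past block breaks every hash that follows it:

```python
import hashlib
import json
import time

def make_block(data: dict, prev_hash: str) -> dict:
    """Time-stamp the data and cryptographically link it to the previous block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    """Recompute every hash; a tampered block no longer matches its stored hash,
    and any rewritten hash breaks the prev_hash link in the next block."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

A real network adds consensus on top of this structure, so an attacker would have to rewrite the chain on a majority of nodes simultaneously, which is what makes tampering impractical rather than merely detectable.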


Thriving in the Second Wave of Big Data Modernization

When businesses want to use big data to power AI solutions – as opposed to the more traditional types of analytics workloads that predominated during the first wave of big data modernization – the problems stemming from poor data management snowball. They transform from mere annoyances or hindrances into show stoppers. ... But in the age of AI, this process would likely instead entail giving the employee access to a generative AI tool that can interpret a question formulated using natural language and generate a response based on the organizational data that the AI was trained on. In this case, data quality or security issues could become very problematic. ... Unfortunately, there is no magic bullet that can cure the types of issues I’ve laid out above. A large part of the solution involves continuing to do the hard work of improving data quality, erecting effective access controls and making data infrastructure even more scalable. As they do these things, however, businesses must pay careful attention to the unique requirements of AI use cases. For example, when they create security controls, they must do so in ways that are recognizable to AI tools, such that the tools will know which types of data should be accessible to which users.
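One concrete form of "controls recognizable to AI tools" is filtering retrieved records by the asking user's entitlements before they ever reach the model. A minimal sketch, with hypothetical document and role structures:

```python
def retrievable_docs(docs, user_roles):
    """Return only documents whose ACL intersects the user's roles.
    Each doc carries its own access-control labels, so the AI layer
    never sees data the asking user couldn't open directly."""
    return [d for d in docs if set(d["allowed_roles"]) & set(user_roles)]

docs = [
    {"id": "q3-revenue", "allowed_roles": ["finance", "exec"], "text": "..."},
    {"id": "handbook", "allowed_roles": ["all-staff"], "text": "..."},
]
print([d["id"] for d in retrievable_docs(docs, ["all-staff"])])  # → ['handbook']
```

The design choice here is that permissions live on the data itself rather than in the AI application, so the same labels govern direct access and AI-mediated access consistently.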


The DevOps Bottleneck: Why IaC Orchestration is the Missing Piece

At the end of the day, instead of eliminating operational burdens, many organizations just shifted them. DevOps, SREs, CloudOps—whatever you call them—these teams still end up being the gatekeepers. They own the application deployment pipelines, infrastructure lifecycle management, and security policies. And like any team, they seek independence and control—not out of malice, but out of necessity. Think about it: If your job is to keep production stable, are you really going to let every dev push infrastructure changes willy-nilly? Of course not. The result? Silos of unique responsibility and sacred internal knowledge. The very teams that were meant to empower developers become blockers instead. ... IaC orchestration isn’t about replacing your existing tools; it’s about making them work at scale. Think about how GitHub changed software development. Version control wasn’t new—but GitHub made it easier to collaborate, review code, and manage contributions without stepping on each other’s work. That’s exactly what orchestration does for IaC. It allows large teams to manage complex infrastructure without turning into a bottleneck. It enforces guardrails while enabling self-service for developers. 
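The guardrails that make self-service safe often boil down to policy checks run against a plan before apply. A toy sketch (hypothetical resource shape, not any specific IaC tool's API) of the kind of check an orchestration layer enforces:

```python
def check_guardrails(resources):
    """Return policy violations for a list of planned resources (simplified dicts).
    An empty result means the self-service apply can proceed without ops review."""
    violations = []
    for r in resources:
        if r.get("public", False):
            violations.append(f"{r['name']}: publicly accessible resource")
        if "owner" not in r.get("tags", {}):
            violations.append(f"{r['name']}: missing required 'owner' tag")
    return violations
```

Codifying the ops team's opinions this way is what dissolves the bottleneck: developers get immediate, automated feedback instead of waiting for a human gatekeeper to review every change.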

Daily Tech Digest - March 20, 2025


Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford



Agentic AI — What CFOs need to know

Agentic AI takes efficiency to the next level as it builds on existing AI platforms with human-like decision-making, relieving employees of monotonous routine tasks, allowing them to focus on more important work. CFOs will be happy to know that, like other forms of AI, agentic AI is scalable and flexible. For example, organizations can build it into customer-facing applications for a highly customized experience or sophisticated help desk. Or they could embed agentic AI behind the scenes in operations. ... Not surprisingly, like other emerging technologies, agentic AI requires thoughtful and strategic implementation. This means starting with process identification and determining which specific processes or functions are suitable for agentic AI. Business leaders also need to determine organizational value and impact and find ways to evaluate and measure to ensure the technology is delivering clear benefits. Companies should also be mindful of team composition and, if necessary, secure external experts to ensure successful implementation. Beyond technical feasibility, there are other considerations such as data security. For now, CFOs and other business leaders need to wrap their heads around the concept of “agents” and keep their minds open to how this powerful technology can best serve the needs of their organization.


5 pitfalls that can delay cyber incident response and recovery

For tabletop exercises to be truly effective, they must have internal ownership and be customized to the organization. CISOs need to ensure that tabletops are tailored to the company’s specific risks, security use cases and compliance requirements. Exercises should be run regularly (quarterly, at a minimum) and evaluated with a critical eye to ensure that outcomes are reflected in the company’s broader incident response plan. ... One of the most common failures in incident response is a lack of timely information sharing. Key stakeholders, including HR, PR, Legal, executives and board members, must be kept informed about the situation in real time. Without proper communication channels and predefined reporting structures, misinformation or delays can lead to confusion, prolonged downtime and even regulatory penalties for failure to report incidents within required timeframes. CISOs are responsible for proactively establishing clear communication protocols and ensuring that all responders and stakeholders understand their role in incident management. ... Out-of-band communication capabilities are critical for safeguarding response efforts and shielding them from an attacker’s view. Organizations should establish secure, independent channels for coordinating incident response that aren’t tied to corporate networks.


Bringing Security to Digital Product Design

We are aware that prioritizing security is a common challenge. Even though it is a critical issue, most leaders behind the development of new products are not interested in prioritizing this type of matter. Whenever possible, they try to focus the team's efforts on features. For this reason, there is often no room for this type of discussion. So what should we do? Fortunately, there are multiple possible solutions. One way to approach the topic is to take advantage of the opportunity of a collaborative and immersive session such as product discovery. ... Usually, in a product discovery session, there is a proposed activity to map personas. To map this kind of behavior, I recommend using the same persona model that is suggested. From there, go deeper into hostile characteristics in sections such as bio, objectives, interests, and frustrations, as in the figure above. After the personas have been described, it is important to deepen the discussion by mapping journeys. The goal here is to identify actions and behaviors that provide ideas on how to correctly deal with threats. Remember that when using an attacker persona, the materials should be written from its perspective. ... Complementing the user journey with likely attacker actions is another technique that helps software development teams map, plan, and address security as early as possible.


From Cloud Native to AI Native: Lessons for the Modern CISO to Win the Cybersecurity Arms Race

Today, CISOs stand at another critical crossroads in security operations: the move from a “Traditional SOC” to an “AI Native SOC.” In this new reality, generative AI, machine learning and large-scale data analytics power the majority of the detection, triage and response tasks once handled by human analysts. Like Cloud Native technology before it, AI Native security methods promise profound efficiency gains but also necessitate a fundamental shift in processes, skillsets and organizational culture.  ... For CISOs, transitioning to an AI Native SOC represents a massive opportunity—akin to how CIOs leveraged DevOps and cloud-native to gain a competitive edge:  Strategic Perspective: CISOs must look beyond tool selection to organizational and cultural shifts. By championing AI-driven security, they demonstrate a future-ready mindset—one that’s essential for keeping up with advanced adversaries and board-level expectations around cyber resilience.  Risk Versus Value Equation: Cloud-native adoption taught CIOs that while there are upfront investments and skill gaps, the long-term benefits—speed, agility, scalability—are transformative. In AI Native security, the same holds true: automation reduces response times, advanced analytics detect sophisticated threats and analysts focus on high-value tasks.  


Europe slams the brakes on Apple innovation in the EU

With its latest Digital Markets Act (DMA) action against Apple, the European Commission (EC) proves it is bad for competition, bad for consumers, and bad for business. It also threatens Europeans with a hitherto unseen degree of data insecurity and weaponized exploitation. The information Apple is being forced to make available to competitors with cynical interest in data exfiltration will threaten regional democracy, opening doors to new Cambridge Analytica scandals. This may sound histrionic. And certainly, if you read the EC’s statement detailing its guidance to “facilitate development of innovative products on Apple’s platforms” you’d almost believe it was a positive thing. ... Apple isn’t at all happy. In a statement, it said: “Today’s decisions wrap us in red tape, slowing down Apple’s ability to innovate for users in Europe and forcing us to give away our new features for free to companies who don’t have to play by the same rules. It’s bad for our products and for our European users. We will continue to work with the European Commission to help them understand our concerns on behalf of our users.” There are several other iniquitous measures contained in Europe’s flawed judgement. For example, Apple will be forced to hand over access to innovations to competitors for free from day one, slowing innovation. 


The Impact of Emotional Intelligence on Young Entrepreneurs

The first element of emotional intelligence is self-awareness, which means being able to identify your emotions as they happen to understand how they affect your behavior. During the COVID-19 pandemic, I often felt frustrated when my sales went down during the international bookfair. But by practicing self-awareness, I was able to acknowledge the frustration and think about its sources instead of letting it lead to impulsive reactions. Being self-aware helps me to stay in control of my actions and make decisions that align with my values. So the solution back then was to keep pushing sales through my online platform instead of showing up in person, as I realized that people were still in lockdown due to the pandemic. Self-regulation is another important aspect of emotional intelligence. While self-awareness is about recognizing emotions, self-regulation focuses on managing how you respond to them. Self-regulation doesn't mean ignoring your emotions but learning to express them in a constructive way. Imagine a situation where you feel angry after receiving negative feedback. Instead of reacting defensively or shouting, self-regulation allows you to take a step back, consider the feedback calmly, and respond appropriately.


Bridging the Gap: Integrating All Enterprise Data for a Smarter Future

To bridge the gap between mainframe and hybrid cloud environments, businesses need a modern, flexible, technology-driven strategy — one that ensures they can access, analyze, and act on their data without disruption. Rather than relying on costly, high-risk "rip-and-replace" modernization efforts, organizations can integrate their core transactional data with modern cloud platforms using automated, secure, and scalable solutions capable of understanding and modernizing mainframe data. One of the most effective methods is real-time data replication and synchronization, which enables mainframe data to be continuously updated in hybrid cloud environments in real time. Low-impact change data capture technology recognizes and replicates only the modified portions of datasets, reducing processing overhead and ensuring real-time consistency across both mainframe and hybrid cloud systems. Another approach is API-based integration, which allows organizations to provide mainframe data as modern, cloud-compatible services. This eliminates the need for batch processing and enables cloud-native applications, AI models, and analytics platforms to access real-time mainframe data on demand. API gateways further enhance security and governance, ensuring only authorized systems can interact with sensitive transactional business data.
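The change-data-capture pattern described above can be sketched simply: replicate only rows whose version advanced past the last replicated watermark, then upsert them into the cloud-side replica. A minimal illustration (hypothetical `version` and `id` columns, not any vendor's CDC API):

```python
def capture_changes(source_rows, last_seen_version):
    """Low-impact CDC sketch: ship only rows modified since the last sync point."""
    changed = [r for r in source_rows if r["version"] > last_seen_version]
    new_watermark = max([last_seen_version] + [r["version"] for r in changed])
    return changed, new_watermark

def apply_changes(target, changed_rows):
    """Upsert changed rows into the replica, keyed by primary key."""
    for r in changed_rows:
        target[r["id"]] = r
```

Because only the delta crosses the wire on each cycle, processing overhead on the mainframe side stays low while the hybrid cloud copy remains continuously consistent; production systems read the delta from transaction logs rather than scanning tables.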


How CISOs are approaching staffing diversity with DEI initiatives under pressure

“In the end, a diverse, engaged cybersecurity team isn’t just the right thing to build — it’s critical to staying ahead in a rapidly evolving threat landscape,” he says. “To fellow CISOs, I’d say: Stay the course. The adversary landscape is global, and so our perspective should be as well. A commitment to DEI enhances resilience, fosters innovation, and ultimately strengthens our defenses against threats that know no boundaries.” Nate Lee, founder and CISO at Cloudsec.ai, says that even if DEI isn’t a specific competitive advantage — although he thinks diversity in many shapes is — it’s the right thing to do, and “weaponizing it the way the administration has is shameful.” “People want to work where they’re valued as individuals, not where diversity is reduced to checking boxes, but where leadership genuinely cares about fostering an inclusive environment,” he says. “The current narrative tries to paint efforts to boost people up as misguided and harmful, which to me is a very disingenuous argument.” ... “Diverse workforces make you stronger and you are a fool if you [don’t] establish a diverse workforce in cybersecurity. You are at a distinct disadvantage to your adversaries who do benefit from diverse thinking, creativity, and motivations.”


AI-Powered Cyber Attacks and Data Privacy in The Age of Big Data

Artificial intelligence has significantly increased attackers' ability to conduct cyber-attacks efficiently, making those attacks more intelligent and larger in scale. Compared to traditional cyber-attacks, AI-driven attacks can automatically learn, adapt, and develop strategies with minimal human intervention. They proactively utilize machine learning, natural language processing, and deep learning models, leveraging these algorithms to identify and analyze vulnerabilities, evade security and detection systems, and develop believable phishing campaigns. ... AI has also significantly increased the intelligence of malware and autonomous hacking systems, which have gained the capability to infiltrate networks, exploit system vulnerabilities, and evade detection. Unlike conventional malware, malware driven by AI can modify its code in real time, making detection and eradication far more difficult for security software. One example is polymorphic malware, which changes its appearance based on data collected from every cyber-attack attempt.


Platform Engineers Must Have Strong Opinions

Many platform engineering teams build internal developer platforms, which allow development teams to deploy their infrastructure with just a few clicks and reduce the number of issues that slow deployments. Because they are designing the underlying application infrastructure across the organization, the platform engineering team must have a strong understanding of their organization and the application types their developers are creating. This is also an ideal point to inject standards about security, data management, observability and other structures that make it easier to manage and deploy large code bases.  ... To build a successful platform engineering strategy, a platform engineering team must have well-defined opinions about platform deployments. Like pizza chefs building curated pizza lists based on expertise and years of pizza experience, the platform engineering team applies its years of industry experience in deploying software to define software deployments inside the organization. The platform engineering team’s experience and opinions guide and shape the underlying infrastructure of internal platforms. They put guardrails into deployment standards to ensure that the provided development capabilities meet the needs of engineering organizations and fulfill the larger organization’s security, observability and maintainability needs.

Daily Tech Digest - December 14, 2024

How Conscious Unbossing Is Reshaping Leadership And Career Growth

Conscious unbossing presents both challenges and opportunities for organizations. On the one hand, fewer employees pursuing traditional leadership tracks can create gaps in decision-making, team development, and operational consistency. On the other hand, organizations that embrace unbossing as a cultural strategy can thrive. Novartis is a prime example, fostering a culture of curiosity and empowerment that drives both engagement and innovation. By breaking down rigid hierarchies, they’ve shown how unbossed leadership can be a strategic advantage rather than a liability. ... Conscious unbossing is transforming how we think about leadership and career progression. Organizations that adapt by redefining leadership roles, offering flexible career pathways, and building cultures rooted in curiosity and empathy will thrive. Companies like Novartis, Patagonia, and Microsoft have proven that unbossed leadership isn’t a limitation—it’s an opportunity to innovate and grow. By embracing this shift, businesses can create resilient, dynamic teams and ensure leadership continuity. However, this approach also comes with challenges that organizations must navigate to ensure its success. One potential downside is the risk of role ambiguity. 


Why agentic AI and AGI are on the agenda for 2025

We’re ready to move beyond basic now, and what we’re seeing is an evolution towards a digital co-worker – an agent. Agents are really those digital coworkers, our friends, that are going to help us to do research, write a text, and then publish it somewhere. So you set the goal – let’s say, run research on some telco and networking predictions for next year – and an agent would do the research and run it by you, and then push it to where it needs to go to get reviewed, edited, and more. You would provide it with an outcome, and it will choose the best path to get to that outcome. Right now, chatbots are really enhanced search engines with creative flair. But Agentic AI is the next stage of evolution, and will be used across enterprises as early as next year. This will require increased network bandwidth and deterministic connectivity, with compute closer to users – but these essentials are already being rolled out as we speak, ensuring Agentic AI is firmly on the agenda for enterprises in the new year. ... Amid the AI rush, we’ve been focused on the outcomes rather than the practicalities of how we’re accessing and storing the data being generated. But concerns are emerging. Where does the data go? Does it disappear in a big cloud? Concerns are obviously being raised in many sectors, particularly in the medical space, in which medical records cannot leave state/national borders.


Robust Error Detection to Enable Commercial Ready Quantum Computers from Quantum Circuits

Quantum Circuits has the goal of first making components that are correct and then scaling the systems. This is part of the larger goal of making commercial-ready quantum computers. What is meant by commercial-ready quantum computers? It means you can bet your business or company on the results of a quantum computer, just as we rely today on servers and computers that provide services via cloud computing systems. Being able to trust and rely on quantum computers means systems that are repeatable, predictable and trusted. They have built an 8-qubit system and enterprise customers have been using it. Customers have said that using error mitigation and error detection can enable them to get far more utility from Quantum Circuits than from competing quantum computers. Error suppression and error mitigation are common techniques, the focus of intensive efforts by most quantum computer companies and the wider quantum computing community. Quantum Circuits’ error-detecting dual-rail qubit innovation allows errors to be detected and corrected first to avoid disrupting performance at scale. This system will enable a 10x reduction in resource requirements for scalable error correction.


5 reasons why Google's Trillium could transform AI and cloud computing - and 2 obstacles

Trillium is designed to deliver exceptional performance and cost savings, featuring advanced hardware technologies that set it apart from earlier TPU generations and competitors. Key innovations include doubled High Bandwidth Memory (HBM), which improves data transfer rates and reduces bottlenecks. Additionally, as part of its TPU system architecture, it incorporates a third-generation SparseCore that enhances computational efficiency by directing resources to the most important data paths. There is also a remarkable 4.7x increase in peak compute performance per chip, significantly boosting processing power. These advancements enable Trillium to tackle demanding AI tasks, providing a strong foundation for future developments and applications in AI. ... Trillium is not just a powerful TPU; it is part of a broader strategy that includes Gemini 2.0, an advanced AI model designed for the "agentic era," and Deep Research, a tool to streamline the management of complex machine learning queries. This ecosystem approach ensures that Trillium remains relevant and can support the next generation of AI innovations. By aligning Trillium with these advanced tools and models, Google is future-proofing its AI infrastructure, making it adaptable to emerging trends and technologies in the AI landscape.


How Industries Are Using AI Agents To Turn Data Into Decisions

In the past, this required hours of manual work to standardize the various file formats — such as converting PDFs to spreadsheets — and reconcile inconsistencies like differing terminologies for revenue or varying date formats. Today, AI agents automate these tasks with human supervision, adapting to schema changes dynamically and normalizing data as it comes in. ... While extracting insights is vital, the ultimate goal of any data workflow is to drive action. Historically, this has been the weakest link in the chain. Insights often remain in dashboards or reports, waiting for human intervention to trigger action. By the time decisions are made, the window of opportunity may already have closed. AI agents, with humans in the loop, are expediting the entire cycle by bridging the gap between analysis and execution. ... The advent of AI agents signals a new era in data management — one where workflows are no longer constrained by team bandwidth or static processes. By automating ETL, enabling real-time analysis and driving autonomous actions, these agents, with the right guardrails and human supervision, are creating dynamic systems that adapt, learn and improve over time.
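The normalization work these agents automate can be sketched deterministically for the two examples the text gives, reconciling revenue terminology and differing date formats (the synonym list and format strings below are illustrative assumptions, and a real agent would adapt them dynamically):

```python
from datetime import datetime

REVENUE_SYNONYMS = {"revenue", "turnover", "sales", "net sales"}  # assumed mapping

def normalize_record(record: dict) -> dict:
    """Map differing revenue terminology onto one field name and parse
    several common date formats into ISO 8601 (assumes a 'date' field exists)."""
    out = {}
    for key, value in record.items():
        out["revenue" if key.lower() in REVENUE_SYNONYMS else key.lower()] = value
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"):
        try:
            out["date"] = datetime.strptime(out["date"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    return out

print(normalize_record({"Turnover": 1200, "date": "31/01/2025"}))
# → {'revenue': 1200, 'date': '2025-01-31'}
```

The difference in the AI-agent version is that the mappings are inferred rather than hand-coded, but human supervision still reviews them, exactly the human-in-the-loop pattern the article describes.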


The Power of Stepping Back: How Rest Fuels Leadership and Growth

It's essential to fully step back from work sometimes, especially when balancing the demands of running a business and being a parent. I find that I'm most energised and focused in the mornings, so I like to use that time to read, take notes, and reflect on different aspects of the business - whether it's strategy, growth, or new ideas. It's my creative time to think deeply and plan ahead. ... It's also important to carve out weekend days when I can fully switch off. This time away from the business helps me come back refreshed and with a clearer perspective. Even though I aim to disconnect, Lee (my husband and co-founder) and I often find ourselves discussing business because it's something we're both passionate about - strangely enough, those conversations don't feel like work. ... Stepping back from the day-to-day grind gave me the mental space to realise that while small tests have their place, they can sometimes limit your potential by encouraging cautious, safe moves. By contrast, thinking bigger and aiming for more ambitious goals has opened up a new level of creativity and opportunity. This shift in mindset has been a game-changer for us - it's unlocked several key growth areas, including new product opportunities and ways to engage with customers. 


Navigating the Future of Big Data for Business Success

Big data is no longer just a tool for competitive advantage – it has become the backbone of innovation and operational efficiency across key industries, driving billion-dollar transformations. ... The combination of artificial intelligence and big data, especially through machine learning (ML), is pushing the boundaries of what’s possible in data analysis. These technologies automate complex decision-making processes and uncover patterns that humans might miss. Google’s DeepMind AI, for instance, made a breakthrough in medical research by using data to predict protein folding, which is already speeding up drug discovery. ... Tech giants like Google and Facebook are increasing their data science teams by 20% annually, underscoring the essential role these experts play in unlocking actionable insights from vast datasets. This growing demand reflects the importance of data-driven decision-making across industries. ... AI and machine learning will also continue to revolutionize big data, playing a critical role in data-driven decision-making across industries. By 2025, AI is expected to generate $3.9 trillion in business value, with organizations leveraging these technologies to automate complex processes and extract valuable insights. 


Five Steps for Creating Responsible, Reliable, and Trustworthy AI

Model testing with human oversight is critically important. It allows data scientists to ensure the models they’ve built function as intended and to root out any errors, anomalies, or biases. However, organizations should not rely solely on the acumen of their data scientists. Enlisting the input of business leaders who are close to the customers can help ensure that the models appropriately address customers’ needs. Being involved in the testing process also gives these leaders a unique perspective that allows them to explain the process to customers and alleviate their concerns. ... Be transparent. Many organizations do not trust information from an opaque “black box.” They want to know how a model is trained and the methods it uses to craft its responses. Secrecy about the model development and data computation processes will only engender further skepticism toward the model’s output. ... Continuous improvement might be the final step in creating trusted AI, but it’s just part of an ongoing process. Organizations must continue to capture, cultivate, and feed data into the model to keep it relevant. They must also consider customer feedback and recommendations on ways to improve their models. These steps form an essential foundation for trustworthy AI, but they’re not the only practices organizations should follow.


With 'TPUXtract,' Attackers Can Steal Orgs' AI Models

The NCSU researchers used a Riscure EM probe station with a motorized XYZ table to scan the chip's surface, along with a high-sensitivity electromagnetic probe to capture its weak radio signals. A Picoscope 6000E oscilloscope recorded the traces, Riscure's icWaves field-programmable gate array (FPGA) device aligned them in real time, and the icWaves transceiver used bandpass filters and AM/FM demodulation to translate the signals and filter out irrelevant ones. As tricky and costly as it may be for an individual hacker, Kurian says, "It can be a competing company who wants to do this, [and they could] in a matter of a few days. For example, a competitor wants to develop [a copy of] ChatGPT without doing all of the work. This is something that they can do to save a lot of money." Intellectual property theft, though, is just one potential motive for stealing an AI model. Malicious adversaries might also benefit from observing the knobs and dials controlling a popular AI model, so they can probe them for cybersecurity vulnerabilities. And for the especially ambitious, the researchers also cited four studies that focused on stealing regular neural network parameters.
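The core signal-processing step described here — band-pass filtering around a carrier frequency, then AM demodulation to recover an envelope that tracks on-chip activity — can be illustrated with a small, dependency-light sketch. This is not the researchers' actual pipeline: the signal, carrier frequency, and noise model below are all invented for illustration, and a crude FFT mask plus rectify-and-average stands in for the icWaves hardware's filters and demodulator.

```python
import numpy as np

fs = 1_000_000                      # sample rate (Hz), assumed for the sketch
t = np.arange(0, 0.01, 1 / fs)

# Hypothetical "leakage": a 100 kHz carrier whose amplitude tracks per-layer
# activity on the chip (a slow square wave stands in for layer boundaries).
baseband = (np.sin(2 * np.pi * 200 * t) > 0).astype(float)
carrier = np.sin(2 * np.pi * 100_000 * t)
rng = np.random.default_rng(0)
trace = (0.5 + 0.5 * baseband) * carrier + 0.05 * rng.normal(size=t.size)

# Band-pass around the carrier via FFT masking (crude but dependency-free).
spec = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec[(freqs < 90_000) | (freqs > 110_000)] = 0
filtered = np.fft.irfft(spec, n=t.size)

# AM demodulation: rectify, then low-pass with a moving average.
win = 200
envelope = np.convolve(np.abs(filtered), np.ones(win) / win, mode="same")

# The envelope should sit visibly higher where the baseband was "on" —
# that contrast is what lets an attacker segment the trace into layers.
on = envelope[baseband > 0.5].mean()
off = envelope[baseband < 0.5].mean()
print(f"on/off envelope ratio: {on / off:.2f}")
```

In the real attack the recovered envelope would be compared against profiled traces to infer layer types and hyperparameters; this sketch only shows why filtering plus demodulation turns a noisy EM capture into a usable activity signal.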


Artificial Intelligence Looms Large at Black Hat Europe

From a business standpoint, advances in AI are going to "make those predictions faster and faster, cheaper and cheaper," he said. Accordingly, "if I was in the business of security, I would try to make all of my problems prediction problems," so they could be solved with prediction engines. What exactly these prediction problems might be remains an open question, although Zanero said other good use cases include analyzing code and extracting information from unstructured text - for example, analyzing logs for cyberthreat intelligence purposes. "So it accelerates your investigation, but you still have to verify it," Moss said. "The verify part escapes most students," Zanero said. "I say that from experience." One verification challenge is that AI often functions like a very complex black-box API, and people have to adapt their prompts to get the proper output, he said. The problem: that approach only works well when you know what the right answer should be, and can thus validate what the machine learning model is doing. "The real problematic areas in all machine learning - not just using LLMs - is what happens if you do not know the answer, and you try to get the model to give you knowledge that you didn't have before," Zanero said. "That's a deep area of research work."



Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson