
Daily Tech Digest - July 01, 2025


Quote for the day:

"Listen with curiosity, speak with honesty, act with integrity." -- Roy T. Bennett


CIOs rethink public cloud as AI strategies mature

Regulatory and compliance concerns are a big driver toward the private cloud or on-premises solutions, says Bastien Aerni, vice president of strategy and technology adoption at GTT. Many companies are shifting their sensitive workloads to private clouds as a piece of broader multicloud and hybrid strategies to support agentic AI and other complex AI initiatives, he adds. “Most of the time, AI is touching confidential data or business-critical data,” Aerni says. “Then the thinking about the architecture and what the workload should be public vs. private, or even on-prem, is becoming a true question.” The public cloud still provides maximum scalability for AI projects, and in recent years, CIOs have been persuaded by the number of extra capabilities available there, he says. “In some of the conversations I had with CIOs, let’s say five years ago, they were mentioning, ‘There are so many features, so many tools,’” Aerni adds. ... “The paradox is clear: AI workloads are driving both massive cloud growth and selective repatriation simultaneously, because the market is expanding so rapidly it’s accommodating multiple deployment models at once,” Kirschner says. “What we are seeing is the maturation from a naive ‘everything-to-the-cloud’ strategy toward intelligent, workload-specific decisions.”


India’s DPDP law puts HR under the microscope—Here’s why that’s a good thing

At first glance, DPDP appears to mirror other data privacy frameworks like GDPR or CCPA. There’s talk of consent, purpose limitation, secure storage, and rights of the data principal (i.e., the individual). But the Indian legislation’s implications ripple far beyond IT configurations or privacy policies. “Mention data protection, and it often gets handed off to the legal or IT teams,” says Gupta. “But that misses the point. Every team that touches personal data is responsible under this law.” For HR departments, this shift is seismic. Gupta underscores how HR sits atop a “goldmine” of personal information—addresses, Aadhaar numbers, medical history, performance reviews, family details, even biometric data in some cases. And this isn't limited to employees; applicants and former workers are also in scope. ... With India housing thousands of global capability centres and outsourcing hubs, DPDP challenges multinationals to look inward. The emphasis so far has been on protecting customer data under global laws like GDPR. But now, internal data practices—especially around employees—are under the scanner. “DPDP is turning the lens inward,” says Gupta. “If your GCC in India tightens data practices, it won’t make sense to be lax elsewhere.”


3 ways developers should rethink their data stack for GenAI success

Traditional data stacks optimized for analytics, for the most part, don’t naturally support the vector search and semantic retrieval patterns that GenAI applications require. Thus, real-time GenAI data architectures need native support for embedding generation and vector storage as first-class citizens. This could mean integrating data with vector databases like Pinecone, Weaviate, or Chroma as part of the core infrastructure. It may also mean searching for multi-modal databases that can support all of your required data types out of the box without needing a bunch of separate platforms. Regardless of the underlying infrastructure, plan for needing hybrid search capabilities that combine traditional keyword search with semantic similarity, and consider how you’ll handle embedding model updates and re-indexing. ... Maintaining data relationships and ensuring consistent access patterns across these different storage systems is the real challenge when working with these various data types. While some platforms are beginning to offer enhanced vector search capabilities that can work across different data types, most organizations still need to architect solutions that coordinate multiple storage systems. The key is to design these multi-modal capabilities into your data stack early, rather than trying to bolt them on later when your GenAI applications demand richer data integration. 
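The hybrid search pattern described above can be sketched in a few lines. This is a toy illustration, not a production retriever: the keyword score is a crude term-overlap stand-in for BM25, and the two-dimensional "embeddings" are hypothetical placeholders for the vectors a store like Pinecone, Weaviate, or Chroma would hold.

```python
import math

def keyword_score(query, doc):
    """Fraction of query terms present in the document (toy BM25 stand-in)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc["text"].lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, query_vec, docs, alpha=0.5):
    """Blend lexical and semantic scores; alpha weights the semantic side."""
    scored = []
    for doc in docs:
        score = ((1 - alpha) * keyword_score(query, doc)
                 + alpha * cosine(query_vec, doc["vec"]))
        scored.append((score, doc["id"]))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

docs = [
    {"id": "a", "text": "vector databases store embeddings", "vec": [0.9, 0.1]},
    {"id": "b", "text": "keyword search with inverted indexes", "vec": [0.1, 0.9]},
]
print(hybrid_search("vector embeddings", [0.8, 0.2], docs))  # ['a', 'b']
```

Tuning `alpha` is one concrete knob teams use when keyword and semantic retrieval disagree on what is relevant.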


Cyber Hygiene: Protecting Your Digital and Financial Health

Digital transformation has reshaped the commercial world, integrating technology into nearly every aspect of operations. That has brought incredible opportunities, but it has also opened doors to new threats. Cyber attacks are more frequent and sophisticated, with malevolent actors targeting everyone from individuals to major corporations and entire countries. It is no exaggeration to say that establishing, and maintaining, effective cyber hygiene has become indispensable. According to Microsoft’s 2023 Digital Defense Report, effective cyber hygiene could prevent 99% of cyber attacks. Yet cyber hygiene is not just about preventing attacks; it is also central to maintaining operational stability and resilience in the event of a cyber breach. In that event, robust cyber hygiene can limit the operational, financial, and reputational impact of a cyber attack, thereby enhancing an entity’s overall risk profile. ... Even though it’s critical, data suggests that many organizations struggle to implement even basic cyber security measures effectively. For example, a 2024 survey by ExtraHop, a Seattle-based cyber security services provider, found that over half of the respondents admitted to using at least one unsecured network protocol, making them susceptible to attacks.


Are Data Engineers Sleepwalking Towards AI Catastrophe?

Data engineers are already overworked. Weigel cited a study that indicated 80% of data engineering teams are already overloaded. But when you add AI and unstructured data to the mix, the workload issue becomes even more acute. Agentic AI provides a potential solution. It’s natural that overworked data engineering teams will turn to AI for help. There’s a bevy of providers building copilots and swarms of AI agents that, ostensibly, can build, deploy, monitor, and fix data pipelines when they break. We are already seeing agentic AI have real impacts on data engineering teams, as well as the downstream data analysts who ultimately are the ones requesting the data in the first place. ... Once human data engineers are out of the loop, bad things can start happening, Weigel said. They potentially face a situation where the volume of data requests–which originally were served by human data engineers but now are being served by AI agents–is beyond their capability to keep up. ... “We’re now back in the dark ages, where we were 10 years ago [when we wondered] why we need data warehouses,” he said. “I know that if person A, B, and C ask a question, and previously they wrote their own queries, they got different results. Right now, we ask the same agent the same question, and because they’re non-deterministic, they will actually create different queries every time you ask it.”
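One mitigation for the non-determinism Weigel describes is to pin the first query an agent generates for each normalized question, so everyone asking the same question replays the identical query. The sketch below is a hypothetical illustration; `flaky_agent` stands in for a real LLM-backed generator.

```python
import hashlib
import itertools

class QueryCache:
    """Pin the first query generated per question so repeat asks are identical."""
    def __init__(self, generate):
        self.generate = generate  # e.g. a call into an LLM-backed agent
        self._cache = {}

    def query_for(self, question):
        # Normalize so trivial variations map to the same cache key.
        key = hashlib.sha256(question.strip().lower().encode()).hexdigest()
        if key not in self._cache:
            self._cache[key] = self.generate(question)
        return self._cache[key]

# Stand-in for a non-deterministic agent: emits a different query on every call.
_counter = itertools.count()
flaky_agent = lambda q: f"SELECT /* v{next(_counter)} */ * FROM sales"

cache = QueryCache(flaky_agent)
first = cache.query_for("What were sales last month?")
second = cache.query_for("  what were sales last month?  ")
print(first == second)  # True: same normalized question, same cached query
```

A cache like this also gives reviewers a single artifact to audit per question, instead of a new query per ask.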


How cybercriminals are weaponizing AI and what CISOs should do about it

Security teams are using AI to keep up with the pace of AI-powered cybercrime, scanning large volumes of data to surface threats earlier. AI helps scan massive amounts of threat data, surface patterns, and prioritize investigations. For example, analysts used AI to uncover a threat actor’s alternate Telegram channels, saving significant manual effort. Another use case: linking sockpuppet accounts. By analyzing slang, emojis, and writing styles, AI can help uncover connections between fake personas, even when their names and avatars are different. AI also flags when a new tactic starts gaining traction on forums or social media. ... As more defenders turn to AI to make sense of vast amounts of threat data, it’s easy to assume that LLMs can handle everything on their own. But interpreting chatter from the underground is not something AI can do well without help. “This diffuse environment, rich in vernacular and slang, poses a hurdle for LLMs that are typically trained on more generic or public internet data,” Ian Gray, VP of Cyber Threat Intelligence at Flashpoint, told Help Net Security. The problem goes deeper than just slang. Threat actors often communicate across multiple niche platforms, each with its own shorthand and tone. 


How To Keep AI From Making Us Stupid

The allure of AI is undeniable. It drafts emails, summarizes lengthy reports, generates code snippets, and even whips up images faster than you can say “neural network.” This unprecedented convenience, however, carries a subtle but potent risk. A study from MIT has highlighted concerns that overuse of AI tools might be degrading our thinking capabilities. That degradation is the digital equivalent of using a GPS so much that you forget how to read a map. Suddenly, your internal compass points vaguely toward convenience and not much else. When we offload critical cognitive tasks entirely to AI (a habit researchers call cognitive offloading), our mental muscles for those tasks can begin to atrophy. ... Treat AI-generated content like a highly caffeinated first draft — full of energy but possibly a little messy and prone to making things up. Your job isn’t to simply hit “generate” and walk away, unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss. Or worse, your audience. Always, always, aggressively edit, proofread, and, most critically, fact-check every single output. ... The real risk isn’t AI taking over our jobs; it’s us letting AI take over our brains. To maintain your analytical edge, continuously challenge yourself. Practice skills that AI complements but doesn’t replace, such as critical thinking, complex problem-solving, nuanced synthesis, ethical judgment, and genuine human creativity.


Governance meets innovation: Protiviti’s strategy for secure, scalable growth in BFSI and beyond

In today’s BFSI landscape, technology alone is no longer a differentiator. True competitive advantage lies in the orchestration of innovation with governance. The deployment of AI in underwriting, the migration of customer data to the cloud, or the use of IoT in insurance all bring immense opportunity—but also profound risks. Without strong guardrails, these initiatives can expose firms to cyber threats, data sovereignty violations, and regulatory scrutiny. Innovation without governance is a gamble; governance without innovation is a graveyard. ... In cloud transformation projects, for instance, we work with clients to proactively assess data localisation risks, cloud governance maturity, and third-party exposures, ensuring resilience is designed from day one. As AI adoption scales across financial services, we bring deep expertise in Responsible AI governance. From ethical frameworks and model explainability to regulatory alignment with India’s DPDP Act and the EU AI Act, our solutions ensure that automated systems remain transparent, auditable, and trustworthy. Our AI risk models integrate regulatory logic into system design, bridging the gap between innovation and accountability.


Cybercriminals take malicious AI to the next level

Cybercriminals are tailoring AI models for specific fraud schemes, including generating phishing emails tailored by sector or language, as well as writing fake job posts, invoices, or verification prompts. “Some vendors even market these tools with tiered pricing, API access, and private key licensing, mirroring the [legitimate] SaaS economy,” Flashpoint researchers found. “This specialization leads to potentially greater success rates and automated complex attack stages,” Flashpoint’s Gray tells CSO. ... Cybercrime vendors are also lowering the barrier for creating synthetic video and voice, with deepfake as a service (DaaS) offerings ... “This ‘prompt engineering as a service’ (PEaaS) lowers the barrier for entry, allowing a wider range of actors to leverage sophisticated AI capabilities through pre-packaged malicious prompts,” Gray warns. “Together, these trends create an adaptive threat: tailored models become more potent when refined with illicit data, PEaaS expands the reach of threat actors, and the continuous refinement ensures constant evolution against defenses,” he says. ... Enterprises need to balance automation with expert analysis, separating hype from reality, and continuously adapt to the rapidly evolving threat landscape. “Defenders should start by viewing AI as an augmentation of human expertise, not a replacement,” Flashpoint’s Gray says. 


“DevOps is Dead? Long Live DevOps-Powered Platforms”

If DevOps and platform engineering needed a common enemy — or ally — to bond over, AI provided it. A panel featuring Nvidia, Google, Rootly and Thoughtworks explained how large language models are automating “the last mile” of toil, from incident response bots that reason over Grafana dashboards to code-gen pipelines that spit out compliant Terraform. ... The logic is straightforward: You can’t automate what you can’t see. For DevOps practitioners, high-fidelity telemetry is now table stakes — whether you’re feeding an agentic AI, debugging an ephemeral sandbox, or proving compliance to auditors. Expect platform blueprints to ship with observability baked in, not bolted on. Look at the badges behind every coffee urn and you’ll spot familiar DevOps and DevSecOps logos — GitHub Actions, Mezmo, Teleport, Cortex, Sedai, Tailscale. Many of these vendors cut their teeth in CI/CD, IaC, or shift-left security long before “platform engineering” was a LinkedIn hashtag. ... So why the funeral garb? My guess: A tongue-in-cheek jab at hype cycles. Just as “DevOps is dead” clickbait pushed us to sharpen our message, the sash was a reminder that real value — not buzzwords — keeps a movement alive. Judging by the hallway traffic and workshop queues, platform engineering is passing that test.

Daily Tech Digest - March 12, 2024

Thinking beyond BitLocker: Managing encryption across Microsoft services

BitLocker is not the only mechanism in an operating system that allows control over encryption settings. Firms often mandate that all sensitive data at rest be kept secure, yet older operating systems may not natively provide the necessary internal or application-layer encryption. Windows includes specific group policies that target how passwords are stored. A case in point is the setting “Store passwords using reversible encryption”. This policy, if enabled, would lower the security posture of your firm. Older protocols used in locations such as web servers and IIS may require that you enable this setting, so you may want to audit your web servers to see whether any developer mandate requires these lesser protections. For example, if you use challenge handshake authentication protocol (CHAP) through remote access or internet authentication services (IAS), you must enable this policy setting. CHAP is an authentication protocol used by remote access and network connections. Digest authentication in internet information services (IIS) also requires that you enable this policy setting.
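One way to audit this policy is to export the local security policy with `secedit /export /cfg secpol.cfg` and check the `ClearTextPassword` value under `[System Access]`. The parser below is a minimal sketch that assumes that export format; validate it against output from your own environment (note that `secedit` typically writes the file as UTF-16).

```python
def reversible_encryption_enabled(secpol_text):
    """Return True if 'Store passwords using reversible encryption' is on.

    Expects text from `secedit /export /cfg <file>` (read with
    encoding='utf-16'), where the policy appears under [System Access]
    as `ClearTextPassword = 0|1`.
    """
    in_section = False
    for line in secpol_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            in_section = (line == "[System Access]")
        elif in_section and line.replace(" ", "").startswith("ClearTextPassword="):
            return line.split("=")[1].strip() == "1"
    return False  # policy absent from the export: treat as disabled

sample = """[System Access]
MinimumPasswordLength = 12
ClearTextPassword = 0
"""
print(reversible_encryption_enabled(sample))  # False: posture is safe
```

Run across a fleet of exported policy files, a check like this quickly surfaces servers where a developer mandate has quietly weakened password storage.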


EU’s use of Microsoft 365 found to breach data protection rules

More broadly, the EDPS’ corrective measures require the Commission to fix its contracts with Microsoft — to ensure they contain the necessary contractual provisions, organizational measures and/or technical measures to ensure personal data is only collected for explicit and specified purposes; and “sufficiently determined” in relation to the purposes for which they are processed. Data must also only be processed by Microsoft or its affiliates or sub-processors “on the Commission’s documented instructions”, per the order — unless it takes place within the region and processing is for a purpose that complies with EU or Member State law; or, if outside the region to be processed for another purpose under third-country law there must be essentially equivalent protection applied. The contracts must also ensure there is no further processing of data — i.e. uses beyond the original purpose for which data is collected. The EDPS found the Commission infringed the “purpose limitation” principle of applicable data protection rules by failing to sufficiently determine the types of personal data collected under the licensing agreement it concluded with Microsoft Ireland, meaning it was unable to ensure these were specific and explicit.


State Dept-backed report provides action plan to avoid catastrophic AI risks

The report focuses on two key risks: weaponization and loss of control. Weaponization includes risks such as AI systems that autonomously discover zero-day vulnerabilities, AI-powered disinformation campaigns and bioweapon design. Zero-day vulnerabilities are unknown or unmitigated vulnerabilities in a computer system that an attacker can use in a cyberattack. While there is still no AI system that can fully accomplish such attacks, there are early signs of progress on these fronts. Future generations of AI might be able to carry out such attacks. “As a result, the proliferation of such models – and indeed, even access to them – could be extremely dangerous without effective measures to monitor and control their outputs,” the report warns. Loss of control suggests that “as advanced AI approaches AGI-like levels of human- and superhuman general capability, it may become effectively uncontrollable.” An uncontrolled AI system might develop power-seeking behaviors such as preventing itself from being shut off, establishing control over its environment, or engaging in deceptive behavior to manipulate humans. 


Threat Groups Rush to Exploit JetBrains’ TeamCity CI/CD Security Flaws

Most recently, researchers with cybersecurity vendor GuidePoint Security reported that the operators behind the BianLian ransomware were exploiting the TeamCity vulnerabilities, initially trying to execute their backdoor malware written in the Go programming language. After failed attempts, the group turned to living-off-the-land methods, using a PowerShell implementation of the backdoor, which provided them with almost identical functionality, the researchers wrote in a report. They detected the attack during an investigation of malicious activity within a customer’s network. It was unclear which of the two vulnerabilities the BianLian attackers exploited, they wrote. After leveraging a vulnerable TeamCity instance to gain initial access, the bad actors were able to create new users in the build server and executed malicious commands that enabled them to move laterally through the network and run post-exploitation activities. ... “The threat actor was detected in the environment after attempting to conduct a Security Accounts Manager (SAM) credential dumping technique, which alerted the victim’s VSOC, GuidePoint’s DFIR team, and GuidePoint’s Threat Intelligence Team (GRIT) and initiated the in-depth review of this PowerShell backdoor,” the researchers wrote.


How cookie deprecation, first-party data and privacy regulations are impacting the data landscape

While advertisers must focus on forging their paths forward in a cookieless landscape, it’s worth considering what comes next for Google. As privacy concerns dwindle with the deprecation of third-party cookies, there’s good reason to believe that antitrust concerns will grow regarding the industry titan. The timing of Google’s deprecation of third-party cookies on Chrome, coming years after Safari and Firefox made the same move, is telling. The simple reality is that Google did not want to make this move until it could develop an alternate approach that enabled the tracking, targeting and monetization of logged-in Chrome users. Now that Google has had the time to secure its ad revenue against any major disruptions, it will end the cookie’s reign. This move will garner added scrutiny from regulators who have already set their antitrust sights on Google in the past. With the deprecation of third-party cookies, Google retains end-to-end control of a massive swath of the advertising technology that powers the internet, and the company is going to be sharing less and less of that power (in the form of data and insights) with its clients and other parties.


Typosquatting Wave Shows No Signs of Abating

Typosquatting criminals are constantly refining their craft in what seems to be a never-ending cat and mouse conflict. Several years ago, researchers discovered the homograph ploy, which substitutes non-Roman characters that are hard to distinguish when they appear on screen. ... In an Infoblox report from last April entitled "A Deep3r Look at Lookal1ke Attacks," the report's authors stated that "everyone is a potential target." "Cheap domain registration prices and the ability to distribute large-scale attacks give actors the upper hand," they wrote in the report. "Attackers have the advantage of scale, and while techniques to identify malicious activity have improved over the years, defenders struggle to keep pace." For instance, the report shows an increasing sophistication in the use of typosquatting lures: not just for phishing or simple fraud but also for more advanced schemes, such as combining websites with fake social media accounts, using nameservers for major spear-phishing email campaigns, setting up phony cryptocurrency trading sites, stealing multifactor credentials and substituting legitimate open-source code with malicious versions to infect unsuspecting developers.
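A minimal version of the skeleton-and-compare approach behind lookalike detection can be sketched as follows. The confusables table here covers only a handful of characters; real detectors draw on Unicode's full confusables data and much richer distance metrics.

```python
import difflib

# Tiny confusables map: digits and Cyrillic letters that mimic Latin ones.
CONFUSABLES = {"0": "o", "1": "l", "3": "e", "5": "s",
               "а": "a", "е": "e", "о": "o"}  # last three are Cyrillic

def skeleton(domain):
    """Collapse lookalike characters so 'deep3r' and 'deeper' compare equal."""
    name = domain.lower().split(".")[0]
    return "".join(CONFUSABLES.get(ch, ch) for ch in name)

def lookalike_score(candidate, brand):
    """1.0 means the skeletons match exactly; values near 1.0 are suspicious."""
    return difflib.SequenceMatcher(None, skeleton(candidate),
                                   skeleton(brand)).ratio()

for d in ["paypa1.com", "pаypal.com", "example.com"]:
    print(d, round(lookalike_score(d, "paypal.com"), 2))
```

Running a newly registered domain feed through a scorer like this, flagging anything above a threshold against your brand list, is a common first filter before human review.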


Are private conversations truly private? A cybersecurity expert explains how end-to-end encryption protects you

The effectiveness of end-to-end encryption in safeguarding privacy is a subject of much debate. While it significantly enhances security, no system is entirely foolproof. Skilled hackers with sufficient resources, especially those backed by security agencies, can sometimes find ways around it. Additionally, end-to-end encryption does not protect against threats posed by hacked devices or phishing attacks, which can compromise the security of communications. The coming era of quantum computing poses a potential risk to end-to-end encryption, because quantum computers could theoretically break current encryption methods, highlighting the need for continuous advancements in encryption technology. Nevertheless, for the average user, end-to-end encryption offers a robust defense against most forms of digital eavesdropping and cyberthreats. As you navigate the evolving landscape of digital privacy, the question remains: What steps should you take next to ensure the continued protection of your private conversations in an increasingly interconnected world?
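The key-agreement idea behind end-to-end encryption can be illustrated with a toy Diffie-Hellman exchange: each party combines its own private key with the other's public value and derives the same secret, while a relay that sees only the public values learns nothing. Everything below is deliberately undersized, and the XOR "cipher" is a stand-in for a real AEAD scheme; never use this for actual security.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters -- illustrative only, far too small for real use.
P = 0xFFFFFFFB
G = 5

def keypair():
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)  # public value is safe to send over the wire
    return private, public

def shared_key(my_private, their_public):
    # (g^b)^a mod p == (g^a)^b mod p, so both sides derive the same secret.
    secret = pow(their_public, my_private, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_encrypt(key, message):
    """Toy stream cipher: XOR against a repeated key (stand-in for real AES)."""
    stream = (key * (len(message) // len(key) + 1))[:len(message)]
    return bytes(m ^ k for m, k in zip(message, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()
# Only the public values cross the wire; the relay never sees the shared key.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
ciphertext = xor_encrypt(k_alice, b"meet at noon")
print(xor_encrypt(k_bob, ciphertext))  # b'meet at noon'
```

The quantum risk mentioned above targets exactly this step: Shor's algorithm would recover the private exponent from the public value, which is why post-quantum key-exchange schemes are being standardized.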


Tax-related scams escalate as filing deadline approaches

“[A] new scheme involves a mailing coming in a cardboard envelope from a delivery service. The enclosed letter includes the IRS masthead with contact information and a phone number that do not belong to the IRS and wording that the notice is ‘in relation to your unclaimed refund’,” the agency noted. Another scam involves phone calls: scammers, pretending to be IRS agents, call the victims and try to convince them that they owe money. They often target recent immigrants, sometimes contacting them in their native language, and threaten them with arrest, deportation, or license suspension if they don’t pay. Some additional tax-related scams the IRS is warning about:

Tax identity theft – Scammers use a person’s identity number to file a tax return or unemployment compensation claim and collect the refunds

Phishing scams – Scammers send convincing emails posing as the IRS to make victims disclose personal and financial information

Unethical tax return preparers – Individuals who pose as tax preparers but don’t actually file tax returns on behalf of the taxpayer despite getting paid for the service. Or, if they do, they direct refunds into their own bank account rather than the taxpayer’s account.


Why cyberattacks need more publicity, not less

Regulators worldwide have recognized this lack of transparency and are tightening legislation to improve the disclosure of security incidents. New rules from the U.S. Securities and Exchange Commission (SEC) require companies to disclose a material cybersecurity incident publicly within four days of its discovery. The European Parliament’s Cyber Resilience Act (CRA) is also seeking to impose further reporting obligations regarding exploited vulnerabilities and incidents. These tougher obligations will force more transparency, although forward-thinking organizations are already championing the benefits of disclosure for the wider community. Part of the support for openness stems from a genuine fear of cyberattacks taking out the UK’s mission-critical infrastructure, such as energy, communications, and hospitals. But there’s added value to be gained, as visibility and accountability can be positive differentiators for businesses. Clear disclosure and reporting procedures demonstrate that an organization understands what’s required to maintain operational resilience when under attack.


10 things I’d never do as an IT professional

Moving my own files instead of copying them is something that immediately makes me feel uneasy. This includes, for example, photos or videos from the camera or audio recordings from a smartphone or audio recorder. If you move such files, which are usually unique, you run the risk of losing them during the move itself. Although this is very rare, it cannot be completely ruled out. But even if the moving process goes smoothly, the data then still exists only once. If the hard drive in the PC breaks, the data is gone. If I make a mistake and accidentally delete the files, they are gone. These are risks that only arise if you start a move operation instead of a copy operation. ... For years, I used external USB hard drives to store my files. The folder structure on these hard drives was usually identical. There were the folders “My Documents,” “Videos,” “Temp,” “Virtual PCs,” and a few more. What’s more, all the hard drives were the same model, which I had once bought generously on a good deal. Some of these disks even had the same volume label — namely “Data.” That wasn’t very clever, because it made it too easy to mix them up. So I ended up confusing one of these hard drives with another one at a late hour and formatted the wrong one.
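The copy-then-verify habit the author recommends can be automated: copy, compare checksums, and delete the source only once the copy is proven intact. A minimal sketch (reading whole files into memory, so suitable for modest file sizes only):

```python
import hashlib
import shutil
from pathlib import Path

def safe_transfer(src, dst):
    """Copy first, verify the checksum, and only then remove the original."""
    src, dst = Path(src), Path(dst)
    shutil.copy2(src, dst)  # copy2 also preserves timestamps and metadata
    src_hash = hashlib.sha256(src.read_bytes()).digest()
    dst_hash = hashlib.sha256(dst.read_bytes()).digest()
    if src_hash != dst_hash:
        dst.unlink()  # discard the bad copy; the original is untouched
        raise IOError(f"Checksum mismatch copying {src} -> {dst}; original kept")
    src.unlink()  # delete the source only after the copy is proven good
```

Unlike a bare move, a crash between any two of these steps leaves at least one intact copy of the file on disk.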


AI-generated recipes won’t get you to Flavortown

“There are gradients of what is fine and not, AI isn’t making recipe development worse because there’s no guarantee that what it puts out works well,” Balingit said. “But the nature of media is transient and unstable, so I’m worried that there might be a point where publications might turn to an AI rather than recipe developers or cooks.” Generative AI still occasionally hallucinates and makes up things that are physically impossible to do, as many companies found out the hard way. Grocery delivery platform Instacart partnered with OpenAI, which runs ChatGPT, for recipe images. The results ranged from hot dogs with the interior of a tomato to a salmon Caesar salad that somehow created a lemon-lettuce hybrid. Proportions were off — as The Washington Post pointed out, the steak size in Instacart’s recipe easily feeds more people than planned. BuzzFeed also came out with an AI tool that recommended recipes from its Tasty brand. ... That explained why I instantly felt the need to double-check the recipes from chatbots. AI models can still hallucinate and wildly misjudge how the volumes of ingredients impact taste. Google’s chatbot, for example, inexplicably doubled the eggs, which made the cake moist but also dense and gummy in a way that I didn’t like.



Quote for the day:

“Expect the best. Prepare for the worst. Capitalize on what comes.” -- Zig Ziglar