
Daily Tech Digest - February 25, 2026


Quote for the day:

"To strongly disagree with someone, and yet engage with them with respect, grace, humility and honesty, is a superpower" -- Vala Afshar



Is ‘sovereign cloud’ finally becoming something teams can deploy – not just discuss?

Historically, sovereign cloud discussions in Europe have been driven primarily by risk mitigation. Data residency, legal jurisdiction, and protection from international legislation have dominated the narrative. These concerns are valid, but they have framed sovereign cloud largely as a defensive measure – a way to reduce exposure – rather than as an enabler of innovation or value creation. Without a clear value proposition beyond compliance, sovereign cloud has struggled to compete with hyperscale public cloud platforms that offer scale, maturity, and rich developer ecosystems. The absence of enforceable regulation has further compounded this. ... Policymakers and enterprises are also beginning to ask a more practical question: where does sovereign cloud actually create the most value? The answer increasingly points to innovation ecosystems, critical national capabilities, and trust. First, there is a growing recognition that sovereign cloud can underpin domestic innovation, particularly in areas such as AI, advanced research, and data-intensive start-ups. Organisations working with sensitive datasets, intellectual property, or public funding often require cloud environments that are both scalable and secure. ... Second, the sovereign cloud is increasingly being aligned with critical digital infrastructure. Sectors like healthcare, energy, transportation, and defence depend on continuity, accountability, and control. 


India’s DPDP rules 2025: Why access controls are priority one for CIOs

The security stack has traditionally broken down at the point of data rendering or exfiltration. Firewalls and encryption protect the data in transit and at rest, but once the data is rendered on a screen, the risk of data breaches from smartphone cameras, screenshots, or unauthorized sharing falls outside the security stack’s ability to protect it. ... Poor enterprise access practices amplify this risk. Over-provisioned user accounts, inconsistent multi-factor authentication, poor logging, and the absence of contextual checks make it easy for insider threats, credential compromise, and supply chain breaches to succeed. Under DPDP, accountability also extends to processors, so third-party CRM or cloud access must meet the same security standards. ... Shift from trust by implication to trust by verification. Implement least-privilege access to ensure users view only required apps and data. Add device posture checks with device binding, location, time, watermarking and behavior analysis to deny suspicious access. ... Implement identity infrastructure for just-in-time access and automated de-provisioning based on role changes. Record fine-grained, immutable logs (user, device, resource, date/time) for breach analysis and annual retention. ... Enable dynamic, user-level watermarks (injecting username, IP address, timestamp) for forensic analysis. Prohibit unauthorized screen capture, sharing, or download activity during sensitive sessions, while permitting approved business processes.
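As a rough illustration of the contextual access checks recommended above, the sketch below combines role-based least privilege with device, location and time-of-day signals before allowing, stepping up or denying a request. The roles, resources and policy thresholds are invented for the example, not taken from the DPDP rules or any vendor product.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessRequest:
    user_role: str
    resource: str
    device_trusted: bool      # device binding / posture check passed
    country: str
    timestamp: datetime

# Minimal least-privilege map: each role sees only the resources it needs.
ROLE_PERMISSIONS = {
    "support_agent": {"crm_tickets"},
    "finance_analyst": {"payment_reports"},
}

ALLOWED_COUNTRIES = {"IN"}          # example policy: access only from India
WORK_HOURS = range(8, 20)           # 08:00-19:59 local time

def evaluate(request: AccessRequest) -> str:
    """Return 'allow', 'deny' or 'step_up' with a log-friendly reason."""
    if request.resource not in ROLE_PERMISSIONS.get(request.user_role, set()):
        return "deny: resource not granted to role"
    if not request.device_trusted:
        return "deny: unknown or non-compliant device"
    if request.country not in ALLOWED_COUNTRIES:
        return "step_up: unusual location, require re-verification"
    if request.timestamp.hour not in WORK_HOURS:
        return "step_up: out-of-hours access"
    return "allow"

if __name__ == "__main__":
    req = AccessRequest("support_agent", "crm_tickets", True, "IN",
                        datetime(2025, 11, 3, 10, 30))
    print(evaluate(req))   # -> allow
```

In a full implementation, every decision (including the denials) would also be written to the immutable access log described above.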


What really caused that AWS outage in December?

The back-story was broken by the Financial Times, which reported the 13-hour outage was caused by a Kiro agentic coding system that decided to improve operations by deleting and then recreating a key environment. AWS on Friday shot back to flag what it dubbed “inaccuracies” in the FT story. “The brief service interruption they reported on was the result of user error — specifically misconfigured access controls — not AI as the story claims,” AWS said. ... “The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool (AI powered or not) or manual action.” That’s an impressively narrow interpretation of what happened. AWS then promised it won’t do it again. ... The key detail missing — which AWS would not clarify — is just what was asked and how the engineer replied. Had the engineer been asked by Kiro “I would like to delete and then recreate this environment. May I proceed?” and the engineer replied, “By all means. Please do so,” that would have been user error. But that seems highly unlikely. The more likely scenario is that the system asked something along the lines of “Do you want me to clean up and make this environment more efficient and faster?” Did the engineer say “Sure” or did the engineer respond, “Please list every single change you are proposing along with the likely result and the worst-case scenario result. Once I review that list, I will be able to make a decision.”


Model Inversion Attacks: Growing AI Business Risk

A model inversion attack is a form of privacy attack against machine learning systems in which an adversary uses the outputs of a model to infer sensitive information about the data used to train it. Rather than breaching a database or stealing credentials, attackers observe how a model responds to input queries and leverage those outputs, often including confidence scores or probability values, to reconstruct aspects of the training data that should remain private. ... This type of attack differs fundamentally from other ML attacks, such as membership inference, which aims to determine whether a specific data point was part of the training set, and model extraction, which seeks to copy the model itself. ... Successful model inversion attacks can inflict significant damage across multiple areas of a business. When attackers extract sensitive training data from machine learning models, organizations face not only immediate financial losses but also lasting reputational harm and operational setbacks that continue well beyond the initial incident. ... Attackers target inference-time privacy by moving through multiple stages, submitting carefully crafted queries, studying the model’s responses, and gradually reconstructing sensitive attributes from the outputs. Because these activities can resemble normal usage patterns, such attacks frequently remain undetected when monitoring systems are not specifically tuned to identify machine learning–related security threats.
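To make the query-and-refine loop concrete, here is a toy sketch of the attack pattern: the adversary only sees a confidence score and keeps the perturbations that raise it, gradually converging on an input resembling the sensitive data. The "model" is a stand-in function and the data is random; real attacks target deployed prediction APIs.

```python
import numpy as np

rng = np.random.default_rng(0)
SECRET_PROTOTYPE = rng.random(16)          # proxy for sensitive training data

def model_confidence(x: np.ndarray) -> float:
    """Black-box model: returns only a confidence score for the target class."""
    return float(np.exp(-np.sum((x - SECRET_PROTOTYPE) ** 2)))

def invert(n_queries: int = 5000, step: float = 0.05) -> np.ndarray:
    candidate = rng.random(16)
    best = model_confidence(candidate)
    for _ in range(n_queries):                      # each iteration = one query
        trial = candidate + rng.normal(0, step, size=16)
        score = model_confidence(trial)
        if score > best:                            # keep changes that raise confidence
            candidate, best = trial, score
    return candidate

reconstruction = invert()
print("error vs. hidden prototype:",
      float(np.mean(np.abs(reconstruction - SECRET_PROTOTYPE))))
```

Because each step is just an ordinary prediction request, the traffic looks like normal usage unless monitoring is tuned to spot long runs of correlated queries.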


It’s time to rethink CISO reporting lines

The age-old problem with CISOs reporting into CIOs is that it could present — or at least appear to present — a conflict of interest. Cybersecurity consultant Brian Levine, a former federal prosecutor who serves as executive director of FormerGov, says that concern is even more warranted today. “It’s the legacy model: Treat security as a technical function instead of an enterprise‑wide risk discipline,” he says. ... Enterprise CISOs should be reporting a notch higher, Levine argues. “Ideally, the CISO would report to the CEO or the general counsel, high-level roles explicitly accountable for enterprise risk. Security is fundamentally a risk and governance function, not a cost‑center function,” Levine points out. “When the CISO has independence and a direct line to the top, organizations make clearer decisions about risk, not just cheaper ones." ... Painter is “less dogmatic about where the CISO reports and more focused on whether they actually have a seat at the table,” he says. “Org charts matter far less than influence,” he adds. “Whether the CISO reports to the CIO, the CEO, or someone else, the real question is this: Are they brought in early, listened to, and empowered to shape how the business operates? When that’s true, the structure works. When it’s not, no reporting line will save it.” ... “When the CISO reports to the CIO, risk can be filtered, prioritized out of sight, or reshaped to fit a delivery narrative. It’s not about bad actors. It’s about role tension. And when that tension exists within the same reporting line, risk loses.”


AI drives cyber budgets yet remains first on the chop list

Cybersecurity budgets are rising sharply across large organisations, but a new multinational survey points to a widening gap between spending on artificial intelligence and the ability to justify that spending in business terms. ... "Security leaders are getting mandates to invest in AI, but nobody's given them a way to prove it's working. You can't measure AI transformation with pre-AI metrics," Wilson said. He added that security teams struggle to translate operational data into board-level evidence of reduced risk. "The problem isn't that security teams lack data. They're drowning in it. The issue is they're tracking the wrong things and speaking a language the board doesn't understand. Those are the budgets that get cut first. The window to fix this is closing fast," Wilson said. ... "We need new ways to measure security effectiveness that actually show business impact, because boards don't fund faster ticket closure, they fund measurable risk reduction and business resilience. We have to show that we're not just responding quickly but eliminating and improving the conditions that allow incidents to happen in the first place," he said. ... Security leaders reported pressure to invest in AI, while also struggling to link those investments to outcomes executives recognise as resilience and risk reduction. The report argues this tension may become harder to sustain if economic conditions tighten and boards begin looking for costs to cut.


A cloud-smart strategy for modernizing mission-critical workloads

As enterprises mature in their cloud journeys, many CIOs and senior technology leaders are discovering that modernization is not about where workloads run — it’s about how deliberately they are designed. This realization is driving a shift from cloud-first to cloud-smart, particularly for systems the business cannot afford to lose. A cloud-smart strategy, as highlighted by the Federal Cloud Computing Strategy, encourages agencies to weigh the long-term, total costs of ownership and security risks rather than focusing only on immediate migration. ... Sticking indefinitely with legacy systems can lead to rising maintenance costs, inability to support new business initiatives, security vulnerabilities and even outages as old hardware fails. Many organizations reach a tipping point where they must modernize to stay competitive. The key is to do it wisely — balancing speed and risk and having a solid strategy in place to navigate the complexity. ... A cloud-smart strategy aligns workload placement with business risk, performance needs and regulatory expectations rather than ideology. Instead of asking whether a system can move to the cloud, cloud-smart organizations ask where it performs best. ... Rather than lifting and shifting entire platforms, teams separate core transaction engines from decisioning, orchestration and experience layers. APIs and event-driven integration enable new capabilities around stable cores, allowing systems to evolve incrementally without jeopardizing operational continuity.


Enterprises still can't get a handle on software security debt – and it’s only going to get worse

Four-in-five organizations are drowning in software security debt, new research shows, and the backlog is only getting worse. ... "The speed of software development has skyrocketed, meaning the pace of flaw creation is outstripping the current capacity for remediation,” said Chris Wysopal, chief security evangelist at Veracode. “Despite marginal gains in fix rates, security debt is becoming a much larger issue for many organizations." Organizations are discovering more vulnerabilities as their testing programs mature and expand. Meanwhile, the accelerating pace of software releases creates a continuous stream of new code before existing vulnerabilities can be addressed. ... "Now that AI has taken software development velocity to an unprecedented level, enterprises must ensure they’re making deliberate, intelligent choices to stem the tide of flaws and minimize their risk," said Wysopal. The rise in flaws classed as both “severe” and “highly exploitable” means organizations need to shift from generic severity scoring to prioritization based on real-world attack potential, advised Veracode. As such, researchers called for a shift from simple detection toward a more strategic framework of Prioritize, Protect, and Prove. ... “We are at an inflection point where running faster on the treadmill of vulnerability management is no longer a viable strategy. Success requires a deliberate shift,” said Wysopal.
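The shift from generic severity scoring toward real-world attack potential can be sketched as a simple re-ranking exercise: weight impact by an exploitation-likelihood signal and an exposure factor. The fields and numbers below are illustrative assumptions, not Veracode's scoring methodology.

```python
# Rank findings by impact (CVSS) x exploitation likelihood, with an
# exposure multiplier for internet-facing assets. All values are made up.
findings = [
    {"id": "SQLI-114", "cvss": 9.8, "exploit_probability": 0.62, "internet_facing": True},
    {"id": "XSS-891",  "cvss": 6.1, "exploit_probability": 0.03, "internet_facing": True},
    {"id": "PATH-207", "cvss": 7.5, "exploit_probability": 0.01, "internet_facing": False},
]

def risk_score(f: dict) -> float:
    score = f["cvss"] * f["exploit_probability"]
    return score * 1.5 if f["internet_facing"] else score   # exposure multiplier

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f['id']:>9}  risk={risk_score(f):5.2f}")
```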


Protecting your users from the 2026 wave of AI phishing kits

To protect your users today, you have to move past the idea of reactive filtering and embrace identity-centric security. This means your software needs to be smart enough to validate that a user is who they say they are, regardless of the credentials they provide. We’re seeing a massive shift toward behavioral analytics. Instead of just checking a password, your platform should be looking at communication patterns and login behaviors. If a user who typically logs in from Chicago suddenly tries to authorize a high-value financial transfer from a new device in a different country, your system should do more than just send a push notification. ... Beyond the tech, you need to think about the “human” friction you’re creating. We often prioritize convenience over security, but in the current climate, that’s a losing bet. Implementing “probabilistic approval workflows” can help. For example, if your system’s AI is 95% sure a login is legitimate, let it through. If that confidence drops, trigger a more rigorous verification step. ... The phishing scams of 2026 are successful because they leverage the same tools we use for productivity. To counter them, we have to be just as innovative. By building identity validation and phishing-resistant protocols into the core of your product, you’re doing more than just securing data. You’re securing the trust that your business is built on. 
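The "probabilistic approval workflow" described above is easy to sketch: a behavioural risk engine returns a confidence that the login is legitimate, and the product routes the request to allow, step-up verification or block. The signals, weights and thresholds below are invented for illustration.

```python
def confidence_login_is_legit(signals: dict) -> float:
    """Stand-in for a behavioural-analytics model; returns 0.0-1.0."""
    score = 1.0
    if signals["new_device"]:
        score -= 0.25
    if signals["country"] != signals["usual_country"]:
        score -= 0.35
    if signals["high_value_action"]:
        score -= 0.15
    return max(score, 0.0)

def decide(signals: dict) -> str:
    confidence = confidence_login_is_legit(signals)
    if confidence >= 0.95:
        return "allow"                 # invisible to the user
    if confidence >= 0.60:
        return "step_up"               # e.g. a phishing-resistant passkey prompt
    return "block_and_alert"

print(decide({"new_device": True, "country": "BR", "usual_country": "US",
              "high_value_action": True}))   # -> block_and_alert
```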


GitOps Implementation at Enterprise Scale — Moving Beyond Traditional CI/CD

Most engineering organizations running traditional CI/CD pipelines eventually hit the ceiling. Deployments work until they don’t, and when they break, the fixes are manual, inconsistent and hard to trace. ... We kept Jenkins and GitHub Actions in the stack for build and test stages where they already worked well. Harness remained an option for teams requiring more sophisticated approval workflows and governance controls. We ruled out purely script-based push deployment approaches because they offered poor drift control and scaled badly. ... Organizational resistance proved more challenging to address than the technical work. Teams feared the new approach would introduce additional bureaucracy. Engineers accustomed to quick kubectl fixes worried about losing agility. We ran hands-on workshops demonstrating that GitOps actually produced faster deployments, easier rollbacks and better visibility into what was running where. We created golden templates for common deployment patterns, so teams did not have to start from scratch. ... Unexpected benefits emerged after full adoption. Onboarding improved as deployment knowledge now lived in Git history and manifests rather than in senior engineers’ heads. Incident response accelerated because traceability let teams pinpoint exactly what changed and when, and rollback became a consistent, reliable operation. The shift from push-based to pull-based operations improved security posture by limiting direct cluster access.
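A minimal sketch of the drift control that motivated the pull-based choice: compare the desired state recorded in Git with what is actually running and flag any divergence. The manifests are plain dictionaries here; in practice an agent such as Argo CD or Flux performs this comparison continuously against the cluster API.

```python
# Desired state as committed to Git vs. what is live in the cluster.
desired_from_git = {
    "payments-api": {"image": "registry.local/payments:1.14.2", "replicas": 4},
    "checkout-web": {"image": "registry.local/checkout:2.3.0",  "replicas": 2},
}

live_in_cluster = {
    "payments-api": {"image": "registry.local/payments:1.14.2", "replicas": 4},
    "checkout-web": {"image": "registry.local/checkout:2.3.1",  "replicas": 2},  # kubectl hotfix
}

def detect_drift(desired: dict, live: dict) -> list[str]:
    drift = []
    for name, spec in desired.items():
        running = live.get(name)
        if running is None:
            drift.append(f"{name}: missing from cluster")
        elif running != spec:
            drift.append(f"{name}: live {running} != Git {spec}")
    return drift

for item in detect_drift(desired_from_git, live_in_cluster):
    print("DRIFT:", item)   # a GitOps agent would now re-apply the Git version
```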

Daily Tech Digest - January 06, 2025

Should States Ban Mandatory Human Microchip Implants?

“U.S. states are increasingly enacting legislation to pre-emptively ban employers from forcing workers to be ‘microchipped,’ which entails having a subdermal chip surgically inserted between one’s thumb and index finger," wrote the authors of the report. "Internationally, more than 50,000 people have elected to receive microchip implants to serve as their swipe keys, credit cards, and means to instantaneously share social media information. This technology is especially popular in Sweden, where chip implants are more widely accepted to use for gym access, e-tickets on transit systems, and to store emergency contact information.” ... “California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision," Singularity Hub wrote. "In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.” That same piece quotes Alan Mardinly, who is director of biology at Science Corporation, as saying that the advantage of a biohybrid implant is that it "can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain."


AI revolution drives demand for specialized chips, reshaping global markets

There’s now a shift toward smaller AI models that only use internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, Edge AI is taking hold, because it allows AI processing to happen on devices (including PCs, smartphones, vehicles and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips. “The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it. ... Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. The input from diverse resources creates a more comprehensive understanding of that data and enhances performance across tasks.


How to Address an Overlooked Aspect of Identity Security: Non-human Identities

Compromised identities and credentials are the No. 1 tactic for cyber threat actors and ransomware campaigns to break into organizational networks and spread and move laterally. Identity is the most vulnerable element in an organization’s attack surface because there is a significant misperception around what identity infrastructure (IDP, Okta, and other IT solutions) and identity security providers (PAM, MFA, etc.) can protect. Each solution only protects the silo that it is set up to secure, not an organization’s complete identity landscape, including human and non-human identities (NHIs), privileged and non-privileged users, on-prem and cloud environments, IT and OT infrastructure, and many other areas that go unmanaged and unprotected. ... Most organizations use a combination of on-prem management tools, a mix of one or more cloud identity providers (IdPs), and a handful of identity solutions (PAM, IGA) to secure identities. But each tool operates in a silo, leaving gaps and blind spots that make successful attacks more likely. Eight out of 10 organizations cannot prevent the misuse of service accounts in real time because visibility and security are sporadic or missing. NHIs fly under the radar as security and identity teams sometimes don’t even know they exist.


Version Control in Agile: Best Practices for Teams

With multiple developers working on different features, fixes, or updates simultaneously, it’s easy for code to overlap or conflict without clear guidelines. Having a structured branching approach prevents confusion and minimizes the risk of one developer’s work interfering with another’s. ... One of the cornerstones of good version control is making small, frequent commits. In Agile development, progress happens in iterations, and version control should follow that same mindset. Large, infrequent commits can cause headaches when it’s time to merge, increasing the chances of conflicts and making it harder to pinpoint the source of issues. Small, regular commits, on the other hand, make it easier to track changes, test new functionality, and resolve conflicts early before they grow into bigger problems. ... An organized repository is crucial to maintaining productivity. Over time, it’s easy for the repository to become cluttered with outdated branches, unnecessary files, or poorly named commits. This clutter slows down development, making it harder for team members to navigate and find what they need. Teams should regularly review their repositories and remove unused branches or files that are no longer relevant. 
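As a small housekeeping example for the repository-hygiene advice above, the sketch below lists local branches already merged into the main branch so the team can review and delete them. The base branch name and the protected set are assumptions; run it inside a Git working copy.

```python
import subprocess

PROTECTED = {"main", "master", "develop"}

def merged_branches(base: str = "main") -> list[str]:
    """Return local branches whose work is already merged into `base`."""
    out = subprocess.run(
        ["git", "branch", "--merged", base],
        capture_output=True, text=True, check=True,
    ).stdout
    branches = [line.strip().lstrip("* ").strip() for line in out.splitlines()]
    return [b for b in branches if b and b not in PROTECTED]

if __name__ == "__main__":
    for branch in merged_branches():
        # Review first; delete with: git branch -d <branch>
        print("candidate for cleanup:", branch)
```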


Abusing MLOps platforms to compromise ML models and enterprise data lakes

Machine learning operations (MLOps) is the practice of deploying and maintaining ML models in a secure, efficient and reliable way. The goal of MLOps is to provide a consistent and automated process for rapidly getting an ML model into production for use by ML technologies. ... There are several well-known attacks that can be performed against the MLOps lifecycle to affect the confidentiality, integrity and availability of ML models and associated data. However, performing these attacks against an MLOps platform using stolen credentials has not been covered in public security research. ... Data poisoning: This attack involves an attacker having access to the raw data being used in the “Design” phase of the MLOps lifecycle to include attacker-provided data or being able to directly modify a training dataset. The goal of a data poisoning attack is to be able to influence the data that is being trained in an ML model and eventually deployed to production. ... Model extraction attacks involve the ability of an attacker to steal a trained ML model that is deployed in production. An attacker could use a stolen model to extract sensitive training data such as the training weights used, or to use the predictive capabilities used in the model for their own financial gain.
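To show the mechanism of the data-poisoning attack described above, the toy sketch below lets an "attacker" relabel part of one class in a synthetic training set and measures how the resulting model degrades on clean labels. The dataset, model choice and flip fractions are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)              # "ground truth" rule

def accuracy_after_poisoning(flip_fraction: float) -> float:
    y_train = y.copy()
    positives = np.flatnonzero(y_train == 1)
    flipped = rng.choice(positives, size=int(flip_fraction * len(positives)),
                         replace=False)
    y_train[flipped] = 0                              # attacker relabels positives
    model = LogisticRegression().fit(X, y_train)      # trained on tampered data
    return model.score(X, y)                          # evaluated against clean labels

for frac in (0.0, 0.25, 0.5):
    acc = accuracy_after_poisoning(frac)
    print(f"{frac:.0%} of positives poisoned -> accuracy {acc:.2f}")
```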


Get Going With GitOps

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code." ... GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers -- even those without prior operations experience -- to control application deployment, making the process more efficient and streamlined. Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. 


Balancing proprietary and open-source tools in cyber threat research

First, it is important to assess the requirements of an organization by identifying the capabilities needed, such as threat intelligence platforms or malware analysis tools. Next, evaluate open-source tools, which can be cost-effective and customizable but may depend on community support and frequent updates. In contrast, proprietary tools could offer advanced features, dedicated support, and better integration with other products. Finally, think about scalability and flexibility, as future growth may necessitate scalable solutions. ... The technology is not magic, but it is a powerful tool to speed up processes and bolster security procedures while also reducing the gap between advanced and junior analysts. However, as of today, the technology still requires verification and validation. Globally, security experts with a dual skill set in security and AI will be in high demand. As the adoption of generative AI systems increases, we need people who understand these technologies, because threat actors are also learning. ... If a CISO needs to evaluate the effectiveness of these tools, they first need to understand their needs and pain points and then seek guidance from experts. Adopting generative AI security solutions just because it is the latest trend is not the right approach.


Get your IT infrastructure AI-ready

Artificial intelligence adoption is a challenge many CIOs grapple with as they look to the future. Before jumping in, their teams must possess practical knowledge, skills, and resources to implement AI effectively. ... AI implementation is costly and the training of AI models requires a substantial investment. "To realize the potential, you have to pay attention to what it's going to take to get it done, how much it's going to cost, and make sure you're getting a benefit," Ramaswami said. "And then you have to go get it done." GenAI has rapidly transformed from an experimental technology to an essential business tool, with adoption rates more than doubling in 2024, according to a recent study by AI at Wharton ... According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premises and public cloud services. "We're finding that very, very, very few people will build their own language model," he said. "That's because building a language model in-house is like building a car in the garage out of spare parts." Companies look to cloud-based language models, but must scrutinize security and governance capabilities while controlling cost over time.


What is an EPMO? Your organization’s strategy navigator

The key is to ensure the entire strategy lifecycle is set up for success rather than endlessly iterating to perfect strategy execution. Without properly defining, governing, and prioritizing initiatives upfront, even the best delivery teams will struggle to achieve business goals in a way that drives the right return for the organization’s investment. For most organizations, there’s more than one gap preventing desired results. ... The EPMO’s job is to strip away unnecessary complexity and create frameworks that empower teams to deliver faster, more effectively, and with greater focus. PMO leaders should ask how this process helps to hit business goals faster. By eliminating redundant meetings and scaling governance to match project size and risk, for example, delivery timelines can shorten. This kind of targeted adjustment keeps momentum high without sacrificing quality or control. ... For an EPMO to be effective, ideally it needs to report directly to the C-suite. This matters because proximity equals influence. When the EPMO has visibility at the top, it can drive alignment across departments, break down silos, drive accountability, and ensure initiatives stay connected to overall business objectives, serving as the strategy navigator for the C-suite.


Data Center Hardware in 2025: What’s Changing and Why It Matters

DPUs can handle tasks like network traffic management, which would otherwise fall to CPUs. In this way, DPUs reduce the load placed on CPUs, ultimately making greater computing capacity available to applications. DPUs have been around for several years, but they’ve become particularly important as a way of boosting the performance of resource-hungry workloads, like AI training, by complementing AI accelerators. This is why I think DPUs are about to have their moment. ... Recent events have underscored the risk of security threats linked to physical hardware devices. And while I doubt anyone is currently plotting to blow up data centers by placing secret bombs inside servers, I do suspect there are threat actors out there vying to do things like plant malicious firmware on servers as a way of creating backdoors that they can use to hack into data centers. For this reason, I think we’ll see an increased focus in 2025 on validating the origins of data center hardware and ensuring that no unauthorized parties had access to equipment during the manufacturing and shipping processes. Traditional security controls will remain important, too, but I’m betting on hardware security becoming a more intense area of concern in the year ahead.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - August 02, 2023

Return-to-office mandates rise as worker productivity drops

In the first quarter of 2023, labor productivity dropped 2.1% in the US, even as the number of hours worked increased by 2.6%, according to the BLS. The highest levels of remote workers are in North America and Northern Europe, with lower levels in Southern Europe, and even fewer still in Asia — particularly in developing countries, according to a study by Stanford University’s Institute for Economic Policy Research (SIEPR) released in July. ... “Bosses want workers back in the office; workers want flexibility,” said Peter Miscovich, the managing director of Jones Lang LaSalle IP (JLL), a global real estate investment and management firm that tracks remote work trends. But current return-to-office mandates haven't always been effective and they risk driving employees away, according to Miscovich. "Given current low-unemployment rates — particularly in technology fields — talent has the upper hand and will have the upper hand over the next 10 to 15 years,” Miscovich said. While some companies have drawn attention for heavy-handed tactics to get employees back to the office, others are succeeding in getting buy-in for structured hybrid work policies.


IT professionals: avoiding bad days at work

The most common cause of stress is work-related, with one recent study showing that 79% of UK professionals say they frequently feel stressed, while our own research revealed that over two-thirds of IT leaders (70%) reported that there is pressure to deliver security protection in a short amount of time. Whilst organisations must be able to identify the sources of stress to support their people, unfortunately, it must be noted that due to the nature of working with technology, IT professionals will encounter stressful situations – whether the solution is to turn it off and on again or something much more serious. Having the right mix of people, processes and technology will assist in minimising these situations; however, when they do occur, it is vital that leaders are able to recognise these situations and support their people. This comes back to ensuring the most appropriate technology is in place, along with having clear plans and processes in place to best support the needs of the organisation, its people and its customers.


Why synthetic data is a must for AI in telecom

Synthetic data reflects real-world data both mathematically and statistically. But rather than being collected from and measured in the real world, it is created by computer simulations, algorithms, simple rules, statistical modeling and other techniques based on small, anonymized real-world samples. “While real data is almost always the best source of insights from data, real data is often expensive, imbalanced, unavailable or unusable due to privacy regulations,” Gartner VP analyst Alexander Linden said in a Q&A blog post. “Synthetic data can be an effective supplement or alternative to real data.” Artificial data can help mitigate weaknesses in real data or can be used when no live data exists, when data is highly sensitive or otherwise biased, or can’t be used, shared or moved. It doesn’t always have to be trained on real data, however: It can be generated just by looking at domain or institutional knowledge or traces of real data. With the massive explosion in the use of data-hungry generative AI models and the necessity of privacy and security, enterprises across industry segments are recognizing the potential in synthetic data.
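One of the simpler generation techniques mentioned above can be sketched in a few lines: fit a statistical model (here a multivariate Gaussian) to a small real sample and draw synthetic records that preserve its means and correlations. The columns and figures are invented; production generators use richer models such as copulas or GANs, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(7)

# Small "real" sample: columns could be age, monthly usage, monthly spend.
real = rng.multivariate_normal(mean=[35, 120, 42],
                               cov=[[60, 15, 8], [15, 400, 90], [8, 90, 50]],
                               size=200)

# Fit simple statistics to the sample...
mean_hat = real.mean(axis=0)
cov_hat = np.cov(real, rowvar=False)

# ...and draw as many synthetic records as needed from the fitted model.
synthetic = rng.multivariate_normal(mean_hat, cov_hat, size=10_000)

print("real means     :", np.round(mean_hat, 1))
print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
```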


DDoS Attacks and the Cyber Threatscape

Occasionally, DDoS attacks were carried out to extort ransom payments, colloquially known as Ransom DDoS (RDDoS) attacks. The RDDoS attack should not be mistaken for ransomware, which may be driven by similar motivations but employs different tactics, techniques, and procedures (TTPs). The operational method in ransomware requires ‘denial of data’ by a malicious script, whereas RDDoS involves denial of service, generally by a botnet. Running a ransomware operation requires access to internal systems, which is not the case in ransom DDoS attacks. In RDDoS, threat actors leverage the threat of denial of service to conduct extortion, which may include sending a private message by email demanding a ransom payment to prevent the organisation from being targeted by a DDoS attack. According to a threat intelligence report, throughout the 2020–2021 global RDDoS campaigns, attacks ranged from a few hours up to several weeks, with attack rates of 200 Gbps and higher. The DDoS attack can also serve as a means of reconnaissance, allowing attackers to assess the target’s vulnerabilities and gauge the strength of its defenses.


MDM’s Role in Strengthening Data Governance Practices

Ensuring regulatory compliance and the trustworthiness of data is paramount. This is where a systematic process comes into play, and Gartner MDM is leading the way in providing a comprehensive solution. With the ability to configure data governance policies, capture metadata, and perform data lineage, Gartner MDM allows for a full understanding of data assets and their use. This translates into improved compliance, reduced risk, and enhanced data trustworthiness. By implementing a systematic process that includes Gartner MDM, organizations can confidently navigate the complex landscape of regulatory requirements, safeguard data integrity, and ultimately increase customer trust. ... Data Governance has become essential with the ever-increasing amount of data organizations generate. However, manually reviewing and managing such a large amount of data can be challenging and time-consuming. This is where automation techniques come into play. By automating data governance processes, organizations can streamline the process, reduce errors, and make better decisions resulting from the data. 


Delivering privacy in a world of pervasive digital surveillance: Tor Project’s Executive Director speaks out

Our stance is clear, we think that encryption is a right – which is why it is built into our technology. As more and more aspects of our lives are carried out digitally, whether it is conducting financial transactions, accessing health care services or staying in touch with friends and loved ones, our online activity should be governed by the same rights to privacy and anonymity as our analog experiences. As part of our work, the Tor Project is currently active in the debate around the need to safeguard E2EE (end-to-end encryption). We are engaged in advocacy work on the issue and have supported other organizations in their efforts to raise awareness, especially as part of the Global Encryption Coalition. ... Earlier this year, we launched the Mullvad Browser, a free, privacy-preserving browser offering similar protections as Tor Browser without the Tor network. Mullvad Browser is another option for internet users who are looking for a privacy-focused browser that doesn’t need a bunch of extensions and plugins to enhance their privacy and reduce the factors that can accidentally de-anonymize themselves.


The Debate Around AI Ethics in Australia is Falling Far Behind

In 2016, the World Economic Forum looked at the top nine ethical issues in artificial intelligence. These issues have all been well-understood for a decade (or longer), which is what makes the lack of movement in addressing them so concerning. In many cases, the concerns the WEF highlighted, which were future-thinking at the time, are starting to become reality, yet the ethical concerns have yet to be actioned. ... The WEF noted the potential for AI bias back in its initial article, and this is one of the most talked-about and debated AI ethics issues. There are several examples of AI assessing people of color and gender differently. However, as UNESCO noted just last year, despite the decade of debate, biases of AI remain fundamental right down to the core. “Type ‘greatest leaders of all time’ in your favorite search engine, and you will probably see a list of the world’s prominent male personalities. How many women do you count? An image search for ‘school girl’ will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. ...”


Vigilance advised if using AI to make cyber decisions

Artificial intelligence (AI) and machine learning (ML) driven tools and technologies are on the rise to help organizations address these challenges by significantly improving their security posture efficiently and effectively. Tools using ML and AI are improving accuracy and speed of response. ... The vendor may have utilised AI in various product development stages. For instance, AI could have been employed to shape the requirements and design of the product, review its design or even generate source code. Additionally, AI might have been used to select relevant open-source code, develop test plans, write the user guide or create marketing content. In some cases, AI could be a functional product component. However, it’s important to note that sometimes an AI capability might really be machine learning (ML). Determining the legitimacy of AI claims can be challenging: the vendor’s transparency and supporting evidence are crucial. Weighing the vendor’s reputation, expertise and track record in AI development is vital for distinguishing authentic AI-powered products from “snake oil.”


3 GitOps Myths Busted

It is highly likely that as your organization embarks on its cloud native journey, there will come a point where scaling to multiclusters becomes necessary. For instance, developers may need to work on and test applications before making pull requests without having direct access to the production code, of course, for applications running in production on Kubernetes. Moreover, in certain scenarios, a team might manage multiple clusters and distribute workloads among them to ensure sufficient fault tolerance and availability. For example, when running a machine learning training workload, the team might increase the number of replicas or cluster replicas to meet specific demands. Additionally, different clusters may be deployed across various physical locations in cloud environments, whether on Amazon Web Services, Azure, GCP and others, requiring separate tools and processes to align with geographic mandates, legal restrictions, compliance requirements, and data access policies.


Simplifying IT strategy: How to avoid the annual planning panic

In developing your strategy, you have two responsibilities related to the finances of any proposed project: First, you must articulate the costs and benefits of the project; and second, you must contextualize those costs and benefits by comparing them to overall budget projections, which should include multi-year projections that align with the needs and norms of your finance organization. Not sure how to frame the numbers? Borrow revenue projections from FP&A, then layer in projected IT run-rate spend, IT project spend for each year in the forecast, and summarize total IT spend as a percentage of revenue. Hint: Be ready to explain any increase in this metric. ... What will you need from others for your plan to succeed? Dedicated resources from BUs and functions? Participation in steering committees? Incremental funding? The point is you can’t drive a transformation alone. Key to success will be clarifying roles and responsibilities and ensuring others have skin in the game. ... Once you’ve tried answering the questions, consult your deputies. Test and refine your hypothesis as a group. 
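The framing suggested above amounts to simple arithmetic, sketched below: take revenue projections from FP&A, layer in run-rate and project spend per year, and express total IT spend as a percentage of revenue. All figures are made up for illustration.

```python
# Multi-year view of IT spend as a percentage of revenue (illustrative $M).
revenue       = {"FY25": 480.0, "FY26": 520.0, "FY27": 565.0}   # from FP&A
run_rate      = {"FY25": 24.0,  "FY26": 25.2,  "FY27": 26.0}    # keep-the-lights-on
project_spend = {"FY25": 9.5,   "FY26": 14.0,  "FY27": 11.0}    # transformation work

for year in revenue:
    total = run_rate[year] + project_spend[year]
    pct = 100 * total / revenue[year]
    # Be ready to explain any year-over-year increase in this percentage.
    print(f"{year}: IT spend ${total:.1f}M = {pct:.1f}% of revenue")
```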



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - October 30, 2021

Ransomware Attacks Are Evolving. Your Security Strategy Should, Too

Modern ransomware attacks typically include various tactics like social engineering, email phishing, malicious email links and exploiting vulnerabilities in unpatched software to infiltrate environments and deploy malware. What that means is that there are no days off from maintaining good cyber-hygiene. But there’s another challenge: As an organization’s defense strategies against common threats and attack methods improve, bad actors will adjust their approach to find new points of vulnerability. Thus, threat detection and response require real-time monitoring of various channels and networks, which can feel like a never-ending game of whack-a-mole. So how can organizations ensure they stay one step ahead, if they don’t know where the next attack will target? The only practical approach is for organizations to implement a layered security strategy that includes a balance between prevention, threat detection and remediation – starting with a zero-trust security strategy. Initiating zero-trust security requires both an operational framework and a set of key technologies designed for modern enterprises to better secure digital assets. 


Stateful Applications in Kubernetes: It Pays to Plan Ahead

Maybe you want to go with a pure cloud solution, like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS). Or perhaps you want to use your on-premises data center for solutions like Red Hat’s OpenShift or Rancher. You’ll need to evaluate all the different components required to get your cluster up and running. For instance, you’ll likely have a preferred container network interface (CNI) plugin that meets your project’s requirements and drives your cluster’s networking. Once your clusters are operational and you’ve completed the development phase, you’ll begin testing your application. But now, your platform team is struggling to maintain your stateful application’s availability and reliability. As part of your stateful application, you’ve been using a database like Cassandra, MongoDB or MySQL. Every time a container is restarted, you begin to see errors in your database. You can prevent these errors with some manual intervention, but then you’re missing out on the native automation capabilities of Kubernetes.


Understanding Kubernetes Compliance and Security Frameworks

Compliance has become crucial for ensuring business continuity, preventing reputational damage and establishing the risk level for each application. Compliance frameworks aim to address security and privacy concerns through easy monitoring of controls, team-level accountability and vulnerability assessment—all of which present unique challenges in a K8s environment. To fully secure Kubernetes, a multi-pronged approach is needed: Clean code, full observability, preventing the exchange of information with untrusted services and digital signatures. One must also consider network, supply chain and CI/CD pipeline security, resource protection, architecture best practices, secrets management and protection, vulnerability scanning and container runtime protection. A compliance framework can help you systematically manage all this complexity. ... The Threat Matrix for Kubernetes, developed from the widely recognized MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) Matrix, takes a different approach based on today’s leading cyberthreats and hacking techniques.


Authentication in Serverless Apps—What Are the Options?

In serverless applications, there are many components interacting—not only end users and applications but also cloud vendors and applications. This is why common authentication methods, such as single factor, two-factor and multifactor authentication offer only a bare minimum foundation. Serverless authentication requires a zero-trust mentality—no connection should be trusted, even communication between internal components of an application should be authenticated and validated. To properly secure serverless authentication, you also need to use authentication and authorization protocols, configure secure intraservice permissions and monitor and control incoming and outgoing access. ... A network is made accessible through a SaaS offering to external users. Access will be restricted, and every user will require the official credentials to achieve that access. However, this brings up the same problems raised above—the secrets must be stored somewhere. You cannot manage how your users access and store the credentials that you provide them with; therefore, you should assume that their credentials are not being kept securely and that they may be compromised at any point.
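A minimal sketch of that zero-trust principle between internal components: every call carries a short-lived signed token that the receiving function verifies before doing any work. The key handling and claim format below are simplified assumptions; in practice the secret would live in a secrets manager and most teams would use an established standard such as JWT, OAuth 2.0 or mTLS rather than hand-rolled signing.

```python
import base64
import hashlib
import hmac
import json
import time

SERVICE_KEY = b"demo-key-from-secrets-manager"   # illustrative; never hard-code keys

def issue_token(caller: str, ttl_seconds: int = 60) -> str:
    """Create a short-lived token identifying the calling component."""
    claims = {"caller": caller, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict | None:
    """Return the claims if the signature and expiry check out, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                                   # tampered or wrong key
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None   # reject expired

token = issue_token("orders-function")
print(verify_token(token))           # valid claims for the internal caller
print(verify_token(token + "0"))     # None: signature check fails
```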


The economics behind adopting blockchain

If we take the insurance sector as a use case, we can see how blockchain mitigates various issues around information asymmetries. One fundamental concern in the insurance sector is the principal-agent problem, which stems from conflicting incentives amidst information asymmetry between the principal (insurer company) and the agent (of the company). Some adverse outcomes of this include unprofessional conduct, agents forging documents to meet assigned targets, as well as misrepresentation of compliance requirements, often leading to mis-selling of insurance products. These problems occur primarily due to the absence of an integrated mechanism to track and prevent fraudulent conduct of the agent. In such a scenario, blockchain has the ability to bridge the gaps and enhance the customer experience by virtue of providing a distributive, immutable and transparent rating system that allows agents to be rated according to their performance by companies as well as clients.


Techstinction - How Technology Use is Having a Severe Impact on our Climate

Like most large organisations, there is a general consciousness of the impact the Financial Services Industry is having on the environment. All three of these banks are taking serious measures to reduce their CO2 emissions and to change the behaviours of their staff. The NatWest Group (which owns RBS), for example, recently published a working-from-home guide for its employees containing tips on how to save energy. Whilst this and all sustainability measures should be applauded, it’s important to acknowledge that "sustainability in our workplace" is very different and less important than "sustainability in our work", simply because there is more to be gained by optimising what we are doing as opposed to where we do it, both financially and for the environment. Sustainability in our work involves being lean in everything we do, including the hardware infrastructure being used, being completely digital in the services provided as well as how we produce software to deliver these services. All the major cloud providers invest heavily in providing energy efficient infrastructure as well using renewable energy sources.


How machine learning speeds up Power BI reports

Creating aggregations you don't end up using is a waste of time and money. "Creating thousands, tens of thousands, hundreds of thousands of aggregations will take hours to process, use huge amounts of CPU time that you're paying for as part of your licence and be very uneconomic to maintain," Netz warned. To help with that, Microsoft turned to some rather vintage database technology dating back to when SQL Server Analysis Services relied on multidimensional cubes, before the switch to in-memory columnar stores. Netz originally joined Microsoft when it acquired his company for its clever techniques around creating collections of data aggregations. "The whole multidimensional world was based on aggregates of data," he said. "We had this very smart way to accelerate queries by creating a collection of aggregates. If you know what the user queries are, [you can] find the best collection of aggregates that will be efficient, so that you don't need to create surplus aggregates that nobody's going to use or that are not needed because some other aggregates can answer [the query].


How GitOps Benefits from Security-as-Code

The emergence of security-as-code signifies how the days of security teams holding deployments up are waning. “Now we have security and app dev who are now in this kind of weird struggle — or I think historically had been — but bringing those two teams together and allowing flexibility, but not getting in the way of development is really to me where the GitOps and DevSecOps emerge. That’s kind of the big key for me,” Blake said. ... Developers today are deploying applications in an often highly distributed microservices environment. Security-as-code serves to both automate security for CI/CD with GitOps while also ensuring security processes are taking interconnectivity into account. “It’s sort of a realization that everything is so interconnected — and you can have security problems that can cause operational problems. If you think about code quality, one of your metrics for ‘this is good code’ doesn’t cause a security vulnerability,” Omier said. “So, I think a lot of these terms really come from acknowledging that you can’t look at individual pieces, when you’re thinking about how we are doing? ..."


The role of Artificial Intelligence in manufacturing

There are a few key applications that make particularly suitable launching pads for manufacturers embarking on their cognitive computing journey – intelligent maintenance, intelligent demand planning and forecasting, and product quality control. The deployment of AI is a complex process, as with many facets of digitisation, but it has not stopped companies from moving forward. The ability to grow and sustain the AI initiative over time, in a manner that generates increasing value for the enterprise, is likely to be crucial to achieving early success milestones on an AI adoption journey. Manufacturing companies are adopting AI and ML with such speed because by using these cognitive computing technologies, organisations can optimise their analytics capabilities, make better forecasts and decrease inventory costs. Improved analytics capabilities enable companies to switch to predictive maintenance, reducing maintenance costs and reducing downtime. The use of AI allows manufacturers to predict when or if functional equipment will fail so that maintenance and repairs can be scheduled in advance.


What the metaverse means for brand experiences

The metaverse is best described as a 3D World Wide Web or a digital facsimile of the physical world. In this realm, users can move about, converse with other users, make purchases, hold meetings, and engage in all manner of other activities. In the metaverse, all seats at live performances are front and center, sporting events are right behind home plate or center court, and of course, all avatars remain young and beautiful — if that’s what you desire — forever. As you might imagine, this is a marketer’s dream. Anheuser-Busch InBev global head of technology and innovation Lindsey McInerney explained to Built In recently that marketing is all about getting to where the people are, and a fully immersive environment is ripe with all manner of possibilities, from targeted marketing and advertising opportunities to fully virtualized brand experiences. Already, companies like ABB are experimenting with metaverse-type marketing opportunities, such as virtual horse racing featuring branded NFTs.



Quote for the day:

"Making those around you feel invisible is the opposite of leadership." -- Margaret Heffernan

Daily Tech Digest - June 29, 2021

How to eliminate ransomware risk, not just manage it

There are a multitude of solutions available, all of which are designed to reduce risk and protect specific areas of the network. However, there is one method that is rising in popularity and has proven to be highly effective. Zero Trust approaches to security are being applied by organisations on a daily basis, developed on the grounds that trust should never be given out superfluously – transitioning from “Trust but Verify” to “Verify, then Trust”. Forrester recently announced that Zero Trust can reduce an organisation’s risk exposure by 37% or more. This model eliminates automatic access for any asset, whether internal or external. It instead assumes that the context of any action must be validated before it can be allowed to proceed. Another technique that has emerged as being one of the best for protecting businesses from ransomware attacks, and that is closely aligned to the Zero Trust model, is micro-segmentation. Micro-segmentation restricts adversary lateral movement through the network and reduces a company’s attack surface. A strong security perimeter, whilst important, is no longer enough to protect business IT networks from ransomware threats – since it just takes one breach of the perimeter to compromise the network.


How to conquer synthetic identity fraud

The arrival of our truly physical-digital existence has forced identity protection to the forefront of our minds and amplified the need to understand how, through technology, our identities and behavior can be used to equalize and authenticate our access to all of life’s experiences. Second, there’s been an exceptional rise in all types of fraud, including synthetic. Tackling this will require an intelligent, coordinated defense against cybercriminals employing new and more sophisticated techniques. Not unlike a police database that tracks criminals in different states, there’s a need for platforms where companies can anonymously share data signatures about bad actors with one another so that fraudulent activity becomes much easier to detect. According to the Aite Group, 72% of financial services firms surveyed believe synthetic identity fraud is a much more pressing issue than identity theft, and the majority plan to make substantive changes in the next two years. With collaboration driving that change, we have seen some cases of increasing synthetic fraud detection by more than 100% and the ability to catch overall forged documents by 8% in certain platforms.


A Look at GitOps for the Modern Enterprise

In the GitOps workflow, the system’s desired configuration is maintained in a source file stored in the git repository with the code itself. The engineer will make changes to the configuration files representing the desired state instead of making changes directly to the system via CLI. Reviewing and approving such changes can be done through standard processes such as — pull requests, code reviews, and merges to the master branch. When the changes are approved and later merged to the master branch, an operator software process is responsible for bringing the system’s current state in line with the desired state based on the configuration stored in the newly updated source file. In a typical GitOps implementation, manual changes are not allowed, and all changes to the configuration should be done to files put in Git. In the strictest case, authority to change the system is given only to the operator software process. In a GitOps model, the infrastructure and operations engineers’ role changes from implementing the infrastructure modifications and application deployments to developing and supporting the automation of GitOps and assisting teams in reviewing and approving changes via Git.
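The operator's job can be compressed into a reconciliation loop like the sketch below: read the desired state committed to Git, compare it with the system's current state, and converge the system towards Git rather than the other way round. The fetch and apply functions are placeholders for real Git and platform integrations.

```python
import time

def fetch_desired_state_from_git() -> dict:
    """Placeholder: parse the configuration file at the tip of the main branch."""
    return {"replicas": 3, "image": "shop-api:1.8.0"}

def fetch_current_state() -> dict:
    """Placeholder: query the running system for its actual configuration."""
    return {"replicas": 3, "image": "shop-api:1.7.9"}

def apply_change(key: str, value) -> None:
    """Placeholder: call the platform API to update one setting."""
    print(f"applying {key} -> {value}")

def reconcile_once() -> None:
    desired, current = fetch_desired_state_from_git(), fetch_current_state()
    for key, value in desired.items():
        if current.get(key) != value:    # drift or new commit detected
            apply_change(key, value)

if __name__ == "__main__":
    for _ in range(3):                   # real operators run this loop indefinitely
        reconcile_once()
        time.sleep(1)
```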


Preventing Transformational Burnout

Participation can be viewed as a strain—it’s a tool that comes in different sizes and models and it is useful. Still, when individuals are forced to participate in anything that doesn’t resonate with their inner motivation, a leader is the one pulling the trigger of burnout. Note that passion is often thought to serve as a band-aid to the individual burnout when there is the perception that, “I care so much I must put all my efforts in the matter.” In situations where management doesn’t wish to share decision-making control with others, where employees or other stakeholders are passive or apathetic (or suffering from individual burnout), or in organizational cultures that take comfort in bureaucracy, pushing participatory efforts may be unwise. Luckily, agile stems from participation and self-organization. As you plan for employee participation in your transformation efforts, it’s important to have realistic expectations. Not all “potential associates” desire to participate and those that do may not yet have the skills to do so productively. As Jean Neumann found in her research on participation in the manufacturing industry, various factors can lead individuals to rationally choose to “not” participate. Neumann further notes, as have others, that participation requires courage.


How AutoML helps to create composite AI?

The most straightforward method for solving the optimization task is a random search over block combinations, but a better choice is meta-heuristic optimization: swarm and evolutionary (genetic) algorithms. In the case of evolutionary algorithms, keep in mind that they need specially designed crossover, mutation, and selection operators. Such operators are important for processing individuals described by a DAG; they also make it possible to take multiple objective functions into account and to include additional procedures that create stable pipelines and avoid overcomplication. The crossover operators can be implemented using subtree crossover schemes, in which two parent individuals are chosen and exchange random parts of their graphs. This is not the only possible implementation; there may be more semantically complex variants (e.g., one-point crossover). Mutation operators may randomly change the model (or computational block) in a random node of the graph, remove a random node, or add a random subtree.
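A minimal Python sketch of the subtree crossover and node mutation described above follows. The Node class, the block names, and the helper functions are hypothetical simplifications; a real composite-AI framework operates on full DAGs with typed inputs and outputs, but the mechanics of swapping random subtrees and replacing the model in a random node are the same.

```python
# Sketch of subtree crossover and node mutation for tree-shaped pipelines.
# Node, the block names, and the helpers are illustrative assumptions.
import copy
import random

class Node:
    """One computational block in a pipeline; children feed their outputs into it."""
    def __init__(self, model, children=None):
        self.model = model
        self.children = children or []

    def all_nodes(self):
        yield self
        for child in self.children:
            yield from child.all_nodes()

def subtree_crossover(parent_a, parent_b, rng=random):
    """Exchange randomly chosen subtrees between two parent pipelines."""
    child_a, child_b = copy.deepcopy(parent_a), copy.deepcopy(parent_b)
    node_a = rng.choice(list(child_a.all_nodes()))
    node_b = rng.choice(list(child_b.all_nodes()))
    # Swapping model and children in place exchanges the two subtrees.
    node_a.model, node_b.model = node_b.model, node_a.model
    node_a.children, node_b.children = node_b.children, node_a.children
    return child_a, child_b

def mutate(individual, model_pool, rng=random):
    """Replace the model in one randomly chosen node with one drawn from a pool."""
    mutant = copy.deepcopy(individual)
    rng.choice(list(mutant.all_nodes())).model = rng.choice(model_pool)
    return mutant

# Usage with two toy pipelines built from made-up block names.
pipeline_1 = Node("xgboost", [Node("scaling"), Node("pca")])
pipeline_2 = Node("random_forest", [Node("imputation")])
offspring_1, offspring_2 = subtree_crossover(pipeline_1, pipeline_2)
mutant = mutate(pipeline_1, model_pool=["ridge", "knn", "lasso"])
```

In an evolutionary AutoML loop, these operators would be combined with a selection step (for example, tournament selection over one or more objective functions) and a validation step that rejects invalid or overcomplicated pipelines.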


AI is driving computing to the edge

Companies are adopting edge computing strategies because sending their ever-increasing piles of data to the cloud, and keeping it there, has become too expensive. Moreover, the time it takes to move data to the cloud, analyze it, and then send an insight back to the original device is too long for many jobs. For example, if a sensor on a factory machine senses an anomaly, the machine’s operator wants to know right away so she can stop the machine (or have a controller stop it). Round-trip data transfer to the cloud simply takes too long. That is why many of the top cloud workloads seen in the slide above involve machine learning or analysis at the edge. Control logic for factories and sensor fusion needs to happen quickly to be valuable, whereas data analytics and video processing can generate so much data that sending it to the cloud and working on it there can be expensive. Latency matters in both of those use cases as well. But a couple of other workloads on the slide indicate where the next big challenge in computing will come from: two of the workloads listed involve data exchanges between multiple nodes.


Data-Wiping Attacks Hit Outdated Western Digital Devices

Storage device manufacturer Western Digital warns that two of its network-attached storage devices - the WD My Book Live and WD My Book Live Duo - are vulnerable to being remotely wiped by attackers and now urges users to immediately disconnect them from the internet. ... The underlying flaw in the newly targeted WD devices is designated CVE-2018-18472 and was first publicly disclosed in June 2019. "Western Digital WD My Book Live (all versions) has a root remote command execution bug via shell metacharacters in the /api/1.0/rest/language_configuration language parameter. It can be triggered by anyone who knows the IP address of the affected device," the U.S. National Vulnerability Database noted at the time. Now, it says, the vulnerability is being reviewed in light of the new attacks. "We are reviewing log files which we have received from affected customers to further characterize the attack and the mechanism of access," Western Digital says. "The log files we have reviewed show that the attackers directly connected to the affected My Book Live devices from a variety of IP addresses in different countries. 


IoT For 5G Could Be Next Opportunity

On the consumer front, a technology currently planned for inclusion in the forthcoming 3GPP Release 17 specification, called NR Light (or Lite), looks very promising. Essentially functioning as a more robust, 5G-network-tied replacement for Bluetooth, NR Light is designed to enable the low latency, high security, and cloud-powered applications of a cellular connection without the high power requirements of a full-blown 5G modem. Practically speaking, this means we could see things like AR headsets tethered to a 5G-connected smartphone use NR Light for their cloud connectivity while being much more power-friendly and battery efficient. Look for more on NR Light in these and other very-low-power applications in 2022. At the opposite end of the spectrum, some carriers are starting the process of “refarming” the radio spectrum they currently use to deliver 2G and 3G traffic. In other words, they are going to shut those networks down in order to reuse those frequencies to deliver more 5G service. The problem is that many existing IoT applications use those older networks because they are well suited to the lower data rates of most IoT devices.


Security and automation are top priorities for IT professionals

Organizations across the globe have experienced crippling cyberattacks over the past year that have significantly impacted the global supply chain. Due to the growing number of threats, 61% of respondents said that improving security measures continues to be the dominant priority. Cybersecurity systems topped the list of what IT professionals plan to invest in for 2022, with 53% of respondents planning to budget for email security tools such as phishing prevention, and 33% of respondents investing in ransomware protection. Cloud technologies were also top of mind this year, with 54% saying their IaaS cloud spending will increase and 36% anticipating growth in spending on SaaS applications. Cloud migration was also a high priority for respondents in 2021, which accounted for migrations across PaaS, IaaS and SaaS software. IT professionals also want to increase their productivity through automation, which ranked second in top technologies for investment. Almost half of respondents stated that they will allocate funds for this in 2021.


Making fungible tokens and NFTs safer to use for enterprises

One of the weaknesses of current token exchange systems is the lack of privacy protection they offer beyond very basic pseudonymization. In Bitcoin, for example, transactions are pseudonymous and reveal the Bitcoin value exchanged. That makes them linkable and traceable, presenting threats that are inadmissible in other settings such as enterprise networks, supply chains, or finance. While some newer cryptocurrencies offer a higher degree of privacy, entirely concealing the actual asset exchanged and the transaction participants, they retain the permissionless character of Bitcoin and others, which presents challenges on the regulatory compliance side. For enterprise blockchains, a permissioned setting is required, in which the identity of participants issuing and exchanging tokens is concealed yet non-repudiable, and transaction participants can be securely identified upon properly authorized requests. A big conundrum in permissioned blockchains is accommodating token payment systems while at the same time preserving the privacy of the parties involved and still allowing for auditing functionality.



Quote for the day:

"The quality of a leader is reflected in the standards they set for themselves." -- Ray Kroc