
Daily Tech Digest - July 03, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


The Goldilocks Theory – preparing for Q-Day ‘just right’

When it comes to quantum readiness, businesses currently have two options: quantum key distribution (QKD) and post-quantum cryptography (PQC). Of these, PQC reigns supreme. Here’s why. On the one hand, you have QKD, which leverages principles of quantum physics, such as superposition, to securely distribute encryption keys. Although great in theory, it needs extensive new infrastructure, including bespoke networks and highly specialised hardware. More importantly, it also lacks authentication capabilities, severely limiting its practical utility. PQC, on the other hand, comprises classical cryptographic algorithms specifically designed to withstand quantum attacks. It can be integrated into existing digital infrastructures with minimal disruption. ... Imagine installing new quantum-safe algorithms prematurely, only to discover later that they’re vulnerable, incompatible with emerging standards, or impractical at scale. This could have the opposite effect, inadvertently increasing the attack surface and creating severe operational headaches – ironically leaving the business less secure. But delaying migration for too long also poses serious risks. Malicious actors could already be harvesting encrypted data, planning to decrypt it when quantum technology matures – so businesses protecting sensitive data such as financial records, personal details, and intellectual property cannot afford indefinite delays.


Sovereign by Design: Data Control in a Borderless World

The regulatory framework for digital sovereignty is a national priority. The EU has set the pace with GDPR and GAIA-X. It prioritizes data residency and local infrastructure. China's cybersecurity law and personal information protection law enforce strict data localization. India's DPDP Act mandates local storage for sensitive data, aligning with its digital self-reliance vision through platforms such as Aadhaar. Russia's federal law No. 242-FZ requires citizen data to stay within the country for the sake of national security. Australia's privacy act focuses on data privacy, especially for health records, and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's personal data protection law enforces localization for sensitive sectors, and Indonesia's personal data protection law covers all citizen-centric data. Singapore's PDPA balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's GDPR, mandates the protection of privacy and fundamental rights of its citizens. ... Tech companies have little option but to comply with the growing demands of digital sovereignty. For example, Amazon Web Services has a digital sovereignty pledge, committing to "a comprehensive set of sovereignty controls and features in the cloud" without compromising performance.


Agentic AI Governance and Data Quality Management in Modern Solutions

Agentic AI governance is a framework that ensures artificial intelligence systems operate within defined ethical, legal, and technical boundaries. This governance is crucial for maintaining trust, compliance, and operational efficiency, especially in industries such as Banking, Financial Services, Insurance, and Capital Markets. In tandem with robust data quality management, Agentic AI governance can substantially enhance the reliability and effectiveness of AI-driven solutions. ... In industries such as Banking, Financial Services, Insurance, and Capital Markets, the importance of Agentic AI governance cannot be overstated. These sectors deal with vast amounts of sensitive data and require high levels of accuracy, security, and compliance. Here’s why Agentic AI governance is essential: Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring AI systems are transparent, fair, and reliable. Regulatory Compliance: Adherence to legal and regulatory requirements helps avoid penalties and safeguard against legal risks. Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance enhances overall operational efficiency and decision-making. Protection of Sensitive Data: Robust governance frameworks protect sensitive financial data from breaches and misuse, ensuring privacy and security. 


Fundamentals of Dimensional Data Modeling

Keeping the dimensions separate from facts makes it easier for analysts to slice and dice and filter data to align with the relevant context underlying a business problem. Data modelers organize these facts and descriptive dimensions into separate tables within the data warehouse, aligning them with the different subject areas and business processes. ... Dimensional modeling provides a basis for meaningful analytics gathered from a data warehouse for many reasons. Its processes standardize dimensions by presenting the data blueprint intuitively. Additionally, dimensional data modeling proves to be flexible as business needs evolve. The data warehouse accommodates updates through the concept of slowly changing dimensions (SCD) as business contexts change. ... Alignment in the design requires these processes, and data governance plays an integral role in getting there. Once the organization is on the same page about the dimensional model’s design, it chooses the best kind of implementation. Implementation choices include the star or snowflake schema around a fact. When organizations have multiple facts and dimensions, they use a cube. A dimensional model defines how a data warehouse architecture, or one of its components, should be built through good design and implementation.
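
To make the fact/dimension split concrete, here is a minimal sketch in Python using pandas; the table and column names (a sales fact with date and product dimensions) are illustrative assumptions, not taken from the article.

```python
import pandas as pd

# Dimension tables: descriptive context, one row per member.
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
    "category": ["Hardware", "Hardware"],
})
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "calendar_date": ["2024-01-01", "2024-01-02"],
    "month": ["2024-01", "2024-01"],
})

# Fact table: numeric measures keyed by the dimensions (star schema).
fact_sales = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "product_key": [1, 2, 1],
    "units_sold": [10, 4, 7],
    "revenue": [100.0, 80.0, 70.0],
})

# "Slice and dice": join facts to dimensions, then filter and aggregate.
star = (fact_sales
        .merge(dim_product, on="product_key")
        .merge(dim_date, on="date_key"))
by_category_month = star.groupby(["category", "month"])["revenue"].sum()
print(by_category_month)
```

Keeping the measures in one narrow fact table and the descriptions in separate dimension tables is what lets the same revenue figures be re-grouped by any combination of attributes without touching the fact data.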


IDE Extensions Pose Hidden Risks to Software Supply Chain

The latest research, published this week by application security vendor OX Security, reveals the hidden dangers of verified IDE extensions. While IDEs provide an array of development tools and features, there are a variety of third-party extensions that offer additional capabilities and are available in both official marketplaces and external websites. ... But OX researchers realized they could add functionality to verified extensions after the fact and still maintain the checkmark icon. After analyzing traffic for Visual Studio Code, the researchers found a server request to the marketplace that determines whether the extension is verified; they discovered they could modify the values featured in the server request and maintain the verification status even after creating malicious versions of the approved extensions. ... Using this attack technique, a threat actor could inject malicious code into verified and seemingly safe extensions that would maintain their verified status. "This can result in arbitrary code execution on developers' workstations without their knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote. "Therefore, relying solely on the verified symbol of extensions is inadvisable." ... "It only takes one developer to download one of these extensions," he says. "And we're not talking about lateral movement. ..."


Business Case for Agentic AI SOC Analysts

A key driver behind the business case for agentic AI in the SOC is the acute shortage of skilled security analysts. The global cybersecurity workforce gap is now estimated at 4 million professionals, but the real bottleneck for most organizations is the scarcity of experienced analysts with the expertise to triage, investigate, and respond to modern threats. One ISC2 survey report from 2024 shows that 60% of organizations worldwide reported staff shortages significantly impacting their ability to secure the organizations, with another report from the World Economic Forum showing that just 15% of organizations believe they have the right people with the right skills to properly respond to a cybersecurity incident. Existing teams are stretched thin, often forced to prioritize which alerts to investigate and which to leave unaddressed. As previously mentioned, the flood of false positives in most SOCs means that even the most experienced analysts are too distracted by noise, increasing exposure to business-impacting incidents. Given these realities, simply adding more headcount is neither feasible nor sustainable. Instead, organizations must focus on maximizing the impact of their existing skilled staff. The AI SOC Analyst addresses this by automating routine Tier 1 tasks, filtering out noise, and surfacing the alerts that truly require human judgment. 


Microservice Madness: Debunking Myths and Exposing Pitfalls

Microservices will reduce dependencies, because they force you to serialize your types into generic graph objects (read: JSON or XML or something similar). This implies that you can just transform your classes into a generic graph object at the interface edges, and accomplish the exact same thing. ... There are valid arguments for using message brokers, and there are valid arguments for decoupling dependencies. There are even valid points of scaling out horizontally by segregating functionality onto different servers. But if your argument in favor of using microservices is "because it eliminates dependencies," you're either crazy, corrupt through to the bone, or you have absolutely no idea what you're talking about (take your pick!). Because you can easily achieve the same amount of decoupling using Active Events and Slots, combined with a generic graph object, in-process, and it will execute 2 billion times faster in production than your "microservice solution" ... "Microservice Architecture" and "Service Oriented Architecture" (SOA) have probably caused more harm to our industry than the financial crisis in 2008 caused to our economy. And the funny thing is, the damage is ongoing because of people repeating mindless superstitious belief systems as if they were the truth.
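
The in-process alternative the author alludes to can be pictured with a small sketch: a registry of named "slots" invoked with a generic graph object (here a plain dict). This is an invented stand-in for illustration, not the Active Events implementation the author refers to.

```python
from typing import Callable, Dict

# Registry of named slots; callers depend only on a name and a generic
# graph object (a plain dict), never on a concrete class.
slots: Dict[str, Callable[[dict], dict]] = {}

def slot(name: str):
    """Register a function as an invokable slot."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        slots[name] = fn
        return fn
    return register

def signal(name: str, args: dict) -> dict:
    """Invoke a slot by name, in-process, with no serialization or network hop."""
    return slots[name](args)

@slot("orders.create")
def create_order(args: dict) -> dict:
    # The callee sees only generic key/value data, as it would with JSON.
    return {"order_id": 42, "total": sum(i["price"] for i in args["items"])}

result = signal("orders.create", {"items": [{"price": 9.5}, {"price": 3.0}]})
print(result)  # {'order_id': 42, 'total': 12.5}
```

The caller and callee share nothing but a string name and a dict, which is the same coupling profile as a JSON call over HTTP, minus the serialization and the network.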


Sustainability and social responsibility

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage the entire thermal load. That’s why hybrid systems that combine liquid and traditional air cooling are increasingly popular. These systems offer the ability to fine-tune energy use, reduce reliance on mechanical cooling, and optimize server performance. HiRef offers advanced cooling distribution units (CDUs) that integrate liquid-cooled servers with heat exchangers and support infrastructure like dry coolers and dedicated high-temperature chillers. This integration ensures seamless heat management regardless of local climate or load fluctuations. ... With liquid cooling systems capable of operating at higher temperatures, facilities can increasingly rely on external conditions for passive cooling. This shift not only reduces electricity usage, but also allows for significant operational cost savings over time. But this sustainable future also depends on regulatory compliance, particularly in light of the recently updated F-Gas Regulation, which took effect in March 2024. The EU regulation aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t directly tailored to the data center sector,” explains Poletto.


Infrastructure Operators Leaving Control Systems Exposed

Threat intelligence firm Censys has scanned the internet twice a month for the last six months, looking for a representative sample composed of four widely used types of ICS devices publicly exposed to the internet. Overall exposure slightly increased from January through June, the firm said Monday. One of the device types Censys scanned for is programmable logic controllers made by Israel-based Unitronics. The firm's Vision-series devices get used in numerous industries, including the water and wastewater sector. Researchers also counted publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco Veeder-Root - that run SiteOmat fuel station automation software. It also looked for devices made by Red Lion that are widely deployed for factory and process automation, as well as in oil and gas environments. It additionally probed for instances of a facilities automation software framework known as Niagara, made by Tridium. ... Report author Emily Austin, principal security researcher at Censys, said some fluctuation over time isn't unusual, given how "services on the internet are often ephemeral by nature." The greatest number of publicly exposed systems were in the United States, except for Unitronics devices, which are also widely used in Australia.


Healthcare CISOs must secure more than what’s regulated

Security must be embedded early and consistently throughout the development lifecycle, and that requires cross-functional alignment and leadership support. Without an understanding of how regulations translate into practical, actionable security controls, CISOs can struggle to achieve traction within fast-paced development environments. ... Security objectives should be mapped to these respective cycles—addressing tactical issues like vulnerability remediation during sprints, while using PI planning cycles to address larger technical and security debt. It’s also critical to position security as an enabler of business continuity and trust, rather than a blocker. Embedding security into existing workflows rather than bolting it on later builds goodwill and ensures more sustainable adoption. ... The key is intentional consolidation. We prioritize tools that serve multiple use cases and are extensible across both DevOps and security functions. For example, choosing solutions that can support infrastructure-as-code security scanning, cloud posture management, and application vulnerability detection within the same ecosystem. Standardizing tools across development and operations not only reduces overhead but also makes it easier to train teams, integrate workflows, and gain unified visibility into risk.

Daily Tech Digest - August 17, 2024

The importance of connectivity in IoT

There is no point in having IoT if the connectivity is weak. Without reliable connectivity, the data from sensors and devices, which is meant to be collected and analysed in real time, may end up arriving late. In healthcare, connected devices monitor the vital signs of patients in an intensive-care ward in real time and alert the physician to any observations that are outside the specified limits. ... The future evolution of connectivity technologies will combine with IoT to significantly expand its capabilities. The arrival of 5G will enable high-speed, low-latency connections. This transition will usher in IoT systems that were previously impossible, such as self-driving vehicles that instantaneously analyse vehicle states and provide real-time collision avoidance. The evolution of edge computing will bring data processing closer to the edge (the IoT devices), thereby significantly reducing latency and bandwidth costs. Connectivity underpins almost everything we see as important with IoT – the data exchange, real-time usage, scale and interoperability we access in our systems.


Aren’t We Transformed Yet? Why Digital Transformation Needs More Work

When it comes to enterprise development, platforms alone can’t address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need to strive for is seamless propagation of changes between environments that are kept production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle. Many enterprises are implementing real-time visualization capabilities to provide administrators and developers with immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions across the entire landscape. This extended visibility is invaluable for quickly identifying and resolving discrepancies before they can cause problems in production environments. A lack of focus on achieving real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient.


Why Staging Doesn’t Scale for Microservice Testing

So are we doomed to live in a world where staging is eternally broken? As we’ve seen, traditional approaches to staging environments are fraught with challenges. To overcome these, we need to think differently. This brings us to a promising new approach: canary-style testing in shared environments. This method allows developers to test their changes in isolation within a shared staging environment. It works by creating a “shadow” deployment of the services affected by a developer’s changes while leaving the rest of the environment untouched. This approach is similar to canary deployments in production but applied to the staging environment. The key benefit is that developers can share an environment without affecting each other’s work. When a developer wants to test a change, the system creates a unique path through the environment that includes their modified services, while using the existing versions of all other services. Moreover, this approach enables testing at the granularity of every code change or pull request. This means developers can catch issues very early in the development process, often before the code is merged into the main branch. 
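
One common way to implement the "unique path" described above is to propagate a test-tenant identifier in a request header and route on it. The sketch below is a hypothetical router: the header name, registry contents, and URLs are invented for illustration and are not taken from the article.

```python
# Minimal sketch of header-based routing for a shared staging environment.
# The header name ("x-test-tenant") and the registries are hypothetical.

BASELINE = {            # stable versions shared by everyone
    "orders":   "http://orders.staging.svc",
    "payments": "http://payments.staging.svc",
}
SHADOWS = {             # per-tenant "shadow" deployments of changed services
    "pr-1234": {"payments": "http://payments-pr-1234.staging.svc"},
}

def resolve(service: str, headers: dict) -> str:
    """Route to a tenant's shadow deployment if one exists, else to baseline."""
    tenant = headers.get("x-test-tenant")
    overrides = SHADOWS.get(tenant, {})
    return overrides.get(service, BASELINE[service])

# A request carrying the tenant header reaches the modified payments service;
# all other traffic keeps using the shared baseline.
print(resolve("payments", {"x-test-tenant": "pr-1234"}))  # shadow deployment
print(resolve("payments", {}))                            # shared baseline
```

Because the override map is keyed per pull request, many developers can exercise their own changes concurrently against one shared set of baseline services.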


A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted and real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China. Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements. ... The EU is not alone in taking action to tame the AI revolution. Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law. Canada is also discussing the AI and Data Bill. Like the EU laws, this will set rules for various AI systems, depending on their risks. Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors. ... The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.


Building constructive partnerships to drive digital transformation

The finance team needs to have a ‘seat at the table’ from the very beginning to overcome these challenges and effect successful transformation. Too often, finance only becomes involved when it comes to the cost and financing of the project, and when finance leaders do try to become involved, they can have difficulty gaining access to the needed data. This was recently confirmed by members of the Future of Finance Leadership Advisory Group, where almost half of the group polled (47%) noted challenges gaining access to needed data. As finance professionals understand the needs of stakeholders within the business, they are in the best position to outline what is needed for IT to create an effective, efficient structure. Finance professionals are in-house consultants who collaborate with other functions to understand their workings and end-to-end procedures, discover where both problems and opportunities exist, identify where processes can be improved, and ultimately find solutions. Digital transformation projects rely on harmonizing processes and standardizing systems across different operations. 


DevSecOps: Integrating Security Into the DevOps Lifecycle

The core of DevSecOps is ‘security as code’, a principle that dictates embedding security into the software development process. To keep every release tight on security, we weave those practices into the heart of our CI/CD flow. Automation is key here, as it smooths out the whole security gig in our dev process, ensuring we are safe from the get-go without slowing us down. A shared responsibility model is another pillar of DevSecOps. Security is no longer the sole domain of a separate security team but a shared concern across all teams involved in the development lifecycle. Working together, security isn’t just slapped on at the end but baked into every step from start to finish. ... Adopting DevSecOps is not without its challenges. Shifting to DevSecOps means we’ve got to knock down the walls that have long kept our devs, ops and security folks in separate corners. Balancing the need for rapid deployment with security considerations can be challenging. To nail DevSecOps, teams must level up their skills through targeted training. Weaving together seasoned systems with cutting-edge DevSecOps tactics calls for a sharp, strategic approach. 
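
As one concrete expression of "security as code", here is a hedged sketch of a CI gate step that fails the build when a scanner reports findings above an allowed severity. The findings.json format, field names, and threshold are assumptions for illustration, not any specific tool's output.

```python
import json
import sys

# Hypothetical gate script, run as a CI step after a security scanner has
# written its results to findings.json (format assumed for illustration).
MAX_SEVERITY_ALLOWED = "medium"
ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(path: str = "findings.json") -> int:
    with open(path) as fh:
        findings = json.load(fh)  # expected: [{"id": ..., "severity": ...}, ...]
    blocking = [f for f in findings
                if ORDER.get(f.get("severity", "low"), 0) > ORDER[MAX_SEVERITY_ALLOWED]]
    for f in blocking:
        print(f"BLOCKING: {f.get('id')} severity={f.get('severity')}")
    return 1 if blocking else 0  # a non-zero exit code fails the pipeline stage

if __name__ == "__main__":
    sys.exit(gate())
```

Checking the gate logic into the repository alongside the application code is what makes the policy reviewable, versioned, and automatically enforced on every release.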


Critical Android Vulnerability Impacting Millions of Pixel Devices Worldwide

This backdoor vulnerability, undetectable by standard security measures, allows unauthorized remote code execution, enabling cybercriminals to compromise devices without user intervention or knowledge due to the app’s privileged system-level status and inability to be uninstalled. The Showcase.apk application possesses excessive system-level privileges, enabling it to fundamentally alter the phone’s operating system despite performing a function that does not necessitate such high permissions. An application’s configuration file retrieval lacks essential security measures, such as domain verification, potentially exposing the device to unauthorized modifications and malicious code execution through compromised configuration parameters. The application suffers from multiple security vulnerabilities. Insecure default variable initialization during certificate and signature verification allows bypass of validation checks. Configuration file tampering risks compromise, while the application’s reliance on bundled public keys, signatures, and certificates creates a bypass vector for verification.


Using Artificial Intelligence in surgery and drug discovery

“We’re seeing how AI is adapting, learning, and starting to give us more suggestions and even take on some independent tasks. This development is particularly thrilling because it spans across diagnostics, therapeutics, and theranostics—covering a wide range of medical areas. We’re on the brink of AI and robotics merging together in a very meaningful way,” Dr Rao said. However, he said he would like to add a word of caution. He said he often tells junior enthusiasts who are eager to use AI in everything: AI is not a replacement for natural stupidity. ... He said that one of the most impressive applications of this AI was during the preparation of a US FDA application, which is typically a very cumbersome and expensive process. “At that point, I’d already completed the preclinical phase but wasn’t certain about the additional 20-30 tests I might need. Instead of spending hundreds of thousands of dollars on trial and error, we fed all our data into this AI system. Now, it’s important to note that pharma companies are usually reluctant to share their proprietary data, so gathering information is often a challenge,” he said.  


Mastercard Is Betting on Crypto—But Not Stablecoins

“We’re opening up this crypto purchase power to our 100 million-plus acceptance locations,” Raj Dhamodharan, Mastercard's head of crypto and blockchain, told Decrypt. “If consumers want to buy into it, if they want to be able to use it, we want to enable that—in a safe way.” Perhaps in the name of safety, the new MetaMask Card isn’t compatible with most cryptocurrencies. You can’t use it to buy a plane ticket with Pepecoin, or a sandwich with SHIB. The card is only compatible with dominant stablecoins USDT and USDC, as well as wrapped Ethereum. ... Dhamodharan and his team are currently endeavoring to create an alternative system to stablecoins that—instead of putting crypto companies like Circle and Tether in the catbird seat of the new digital economy—keeps payment services like Mastercard, and traditional banks, at center. Key to this plan is unlocking the potential of bank deposits, which already exist on digital ledgers—just not ones that live on-chain. Dhamodharan estimates that some $15 trillion worth of digital bank deposits currently exist in the United States alone.


A Group Linked To Ransomhub Operation Employs EDR-Killing Tool

Experts believe RansomHub is a rebrand of the Knight ransomware. Knight, also known as Cyclops 2.0, appeared in the threat landscape in May 2023. The malware targets multiple platforms, including Windows, Linux, macOS, ESXi, and Android. The operators used a double extortion model for their RaaS operation. Knight ransomware-as-a-service operation shut down in February 2024, and the malware’s source code was likely sold to the threat actor who relaunched the RansomHub operation. ... “One main difference between the two ransomware families is the commands run through cmd.exe. While the specific commands may vary, they can be configured either when the payload is built or during configuration. Despite the differences in commands, the sequence and method of their execution relative to other operations remain the same.” states the report published by Symantec. Although RansomHub only emerged in February 2024, it has rapidly grown and, over the past three months, has become the fourth most prolific ransomware operator based on the number of publicly claimed attacks.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - July 17, 2024

Optimization Techniques For Edge AI

Edge devices often have limited computational power, memory, and storage compared to centralised servers. Due to this, cloud-centric ML models need to be retargeted so that they fit within the available resource budget. Further, many edge devices run on batteries, making energy efficiency a critical consideration. The hardware diversity in edge devices, ranging from microcontrollers to powerful edge servers, each with different capabilities and architectures, requires different model refinement and retargeting strategies. ... Many use cases involve the distributed deployment of numerous IoT or edge devices, such as CCTV cameras, working collaboratively towards specific objectives. These applications often have built-in redundancy, making them tolerant to failures, malfunctions, or less accurate inference results from a subset of edge devices. Algorithms can be employed to recover from missing, incorrect, or less accurate inputs by utilising the global information available. This approach allows for the combination of high- and low-accuracy models to optimise resource costs while maintaining the required global accuracy through the available redundancy.
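
One common refinement step when retargeting a cloud-trained model for constrained hardware is post-training quantization. The sketch below applies PyTorch's dynamic quantization to a toy model as one example of this kind of optimisation; the model itself is invented for illustration.

```python
import torch
import torch.nn as nn

# Toy model standing in for a cloud-trained network.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the Linear layers are stored
# as int8, shrinking the model and speeding up CPU inference on edge hardware.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(model(x).shape, quantized(x).shape)  # same interface, smaller weights
```

Quantization is only one option; pruning, distillation, and compiler-level retargeting serve the same goal of fitting the model into the device's compute, memory, and energy budget.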


The Cyber Resilience Act: A New Era for Mobile App Developers

Collaboration is key for mobile app developers to prepare for the CRA. They should first conduct a thorough security audit of their apps, identifying and addressing any vulnerabilities. Then, they’ll want to implement a structured plan to integrate the needed security features, based on the CRA’s checklist. It may also make sense to invest in a partnership with cybersecurity experts who can more efficiently provide more insights and help streamline this process in general. Developers cannot be expected to become top-notch security experts overnight. Working with cybersecurity firms, legal advisors and compliance experts can clarify the CRA and simplify the path to compliance and provide critical insights into best practices, regulatory jargon and tech solutions, ensuring that apps meet CRA standards and maintain innovation. It’s also important to note that keeping comprehensive records of compliance efforts is essential under the CRA. Developers should establish a clear process for documenting security measures, vulnerabilities addressed, and any breaches or other incidents that were identified and remediated. 


Sometimes the cybersecurity tech industry is its own worst enemy

One of the fundamental infosec problems facing most organizations is that strong cybersecurity depends on an army of disconnected tools and technologies. That’s nothing new — we’ve been talking about this for years. But it’s still omnipresent. ... To a large enterprise, “platform” is a code word for vendor lock-in, something organizations tend to avoid. Okay, but let’s say an organization was platform curious. It could also take many months or years for a large organization to migrate from distributed tools to a central platform. Given this, platform vendors need to convince a lot of different people that the effort will be worth it — a tall task with skeptical cybersecurity professionals. ... Fear not, for the security technology industry has another arrow in its quiver — application programming interfaces (APIs). Disparate technologies can interoperate by connecting via their APIs, thus cybersecurity harmony reigns supreme, right? Wrong! In theory, API connectivity sounds good, but it is extremely limited in practice. For it to work well, vendors have to open their APIs to other vendors. 


How to Apply Microservice Architecture to Embedded Systems

In short, the process of deploying and upgrading microservices for an embedded system has a strong dependency on the physical state of the system’s hardware. But there’s another significant constraint as well: data exchange. Data exchange between embedded devices is best implemented using a binary data format. Space and bandwidth capacity are limited in an embedded processor, so text-based formats such as XML and JSON won’t work well. Rather, a binary format such as protocol buffers or a custom binary format is better suited for communication in an MOA scenario in which each microservice in the architecture is hosted on an embedded processor. ... Many traditional distributed applications can operate without each microservice in the application being immediately aware of the overall state of the application. However, knowing the system’s overall state is important for microservices running within an embedded system. ... The important thing to understand is that any embedded system will need a routing mechanism to coordinate traffic and data exchange among the various devices that make up the system.
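
To illustrate the space savings the article describes, here is a small comparison using Python's struct module as a stand-in for a custom binary wire format; the field layout (device id, timestamp, temperature) is invented for illustration.

```python
import json
import struct

# A sensor reading: device id (uint16), timestamp (uint32), temperature (float32).
reading = {"device_id": 42, "timestamp": 1723900000, "temp_c": 21.75}

# Text encoding (JSON): human-readable but verbose.
as_json = json.dumps(reading).encode("utf-8")

# Custom binary encoding: fixed layout, network byte order, 10 bytes total.
# Format "!HIf" = uint16 + uint32 + float32 (a hypothetical wire format).
as_binary = struct.pack("!HIf", reading["device_id"],
                        reading["timestamp"], reading["temp_c"])

print(len(as_json), "bytes as JSON")      # roughly 60 bytes
print(len(as_binary), "bytes as binary")  # 10 bytes

# The receiving service reverses the same layout to decode the message.
device_id, timestamp, temp_c = struct.unpack("!HIf", as_binary)
```

Protocol buffers add schema evolution and code generation on top of this basic idea, which is why they are a common choice when a hand-rolled layout becomes hard to maintain across many device types.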


How to assess a general-purpose AI model’s reliability before it’s deployed

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences. To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. 
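
A rough way to picture the consistency idea is to compare the similarity structure that each model's embeddings induce over the same test points. The sketch below is a simplified stand-in using cosine similarity and correlation, not the researchers' actual algorithm; the data is synthetic.

```python
import numpy as np

def pairwise_cosine(reps: np.ndarray) -> np.ndarray:
    """Cosine similarity matrix between embeddings of the same test points."""
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    return normed @ normed.T

def consistency_score(model_reps) -> float:
    """Average agreement between the similarity structures of different models;
    high agreement is treated here as a proxy for reliability."""
    structures = [pairwise_cosine(r) for r in model_reps]
    scores = []
    for i in range(len(structures)):
        for j in range(i + 1, len(structures)):
            a, b = structures[i].ravel(), structures[j].ravel()
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores))

# Three "slightly different" models embedding the same 5 test points.
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 16))
reps = [base + rng.normal(scale=0.05, size=base.shape) for _ in range(3)]
print(round(consistency_score(reps), 3))  # close to 1.0 => consistent models
```

The appeal of this style of check is that it needs only unlabeled test points, which matches the article's point about judging reliability without a full real-world evaluation dataset.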


The Role of Technology in Modern Product Engineering

Product engineering has seen a significant transformation with the integration of advanced technologies. Tools like Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE) have paved the way for more efficient and precise engineering processes. The early adoption of these technologies has enabled businesses to develop multi-million dollar operations, demonstrating the profound impact of technological advancements in the field. ... Deploying complex software solutions often involves customization and integration challenges. Addressing these challenges requires close client engagement, offering configurable options, and implementing phased customization. ... The future of product engineering is being shaped by technology integration, strategic geographic diversification, and the adoption of advanced methodologies like DevSecOps. As the tech landscape evolves with trends such as AI, Augmented Reality (AR), Virtual Reality (VR), IoT, and sustainable technology, continuous innovation and adaptation are essential.


A New Approach To Multicloud For The AI Era

The evolution from cost-focused to value-driven multicloud strategies marks a significant shift. Investing in multicloud is not just about cost efficiency; it's about creating an infrastructure that advances AI initiatives, spurs innovation and secures a competitive advantage. Unlike single-cloud or hybrid approaches, multicloud offers unparalleled adaptability and resource diversity, which are essential in the AI-driven business environment. Here are a few factors to consider. ... The challenge of multicloud is not simply to utilize a variety of cloud services but to do so in a way that each contributes its best features without compromising the overall efficiency and security of the AI infrastructure. To achieve this, businesses must first identify the unique strengths and offerings of each cloud provider. For instance, one platform might offer superior data analytics tools, another might excel in machine learning performance and a third might provide the most robust security features. The task is to integrate these disparate elements into a seamless whole. 


How Can Organisations Stay Secure In The Face Of Increasingly Powerful AI Attacks

One of the first steps any organisation should take when it comes to staying secure in the face of AI-generated attacks is to acknowledge a significant top-down disparity between the volume and strength of cyberattacks, and the ability of most organisations to handle them. Our latest report shows that just 58% of companies are addressing every security alert. Without the right defences in place, the growing power of AI as a cybersecurity threat could see that number slip even lower. ... Fortunately, there is a solution: low-code security automation. This technology gives security teams the power to automate tedious and manual tasks, allowing them to focus on establishing an advanced threat defence. ... There are other benefits too. These include the ability to scale implementations based on the team’s existing experience and with less reliance on coding skills. And unlike no-code tools that can be useful for smaller organisations that are severely resource-constrained, low-code platforms are more robust and customisable. This can result in easier adaptation to the needs of the business.


Time for reality check on AI in software testing

Given that AI-augmented testing tools are derived from data used to train AI models, IT leaders will also be more responsible for the security and privacy of that data. Compliance with regulations like GDPR is essential, and robust data governance practices should be implemented to mitigate the risk of data breaches or unauthorized access. Algorithmic bias introduced by skewed or unrepresentative training data must also be addressed to mitigate bias within AI-augmented testing as much as possible. But maybe we’re getting ahead of ourselves here. Because even as AI continues to evolve and autonomous testing becomes more commonplace, we will still need human assistance and validation. The interpretation of AI-generated results and the ability to make informed decisions based on those results will remain a responsibility of testers. AI will change software testing for the better. But don’t treat any tool using AI as a straight-up upgrade. They all have different merits within the software development life cycle.


Overlooked essentials: API security best practices

In my experience, there are six important indicators organizations should focus on to detect and respond to API security threats effectively – shadow APIs, APIs exposed to the internet, APIs handling sensitive data, unauthenticated APIs, APIs with authorization flaws, APIs with improper rate limiting. Let me expand on this further. Shadow APIs: Firstly, it’s important to identify and monitor shadow APIs. These are undocumented or unmanaged APIs that can pose significant security risks. Internet-exposed APIs: Limit and closely track the number of APIs accessible publicly. These are more prone to external threats. APIs handling sensitive data: APIs that process sensitive data and are also publicly accessible are among the most vulnerable. They should be prioritized for security measures. Unauthenticated APIs: An API lacking proper authentication is an open invitation to threats. Always have a catalog of unauthenticated APIs and ensure they are not vulnerable to data leaks. APIs with authorization flaws: Maintain an inventory of APIs with authorization vulnerabilities. These APIs are susceptible to unauthorized access and misuse. Implement a process to fix these vulnerabilities as a priority.
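
These indicators lend themselves to a simple automated triage over an API inventory. The sketch below assumes a hypothetical catalog format (all field names and entries are invented) and flags the risk combinations called out above.

```python
# Hypothetical API inventory; field names and entries are invented for illustration.
inventory = [
    {"name": "/v1/users", "documented": True, "internet_exposed": True,
     "handles_sensitive_data": True, "authenticated": True,
     "authz_reviewed": True, "rate_limited": True},
    {"name": "/internal/export", "documented": False, "internet_exposed": True,
     "handles_sensitive_data": True, "authenticated": False,
     "authz_reviewed": False, "rate_limited": False},
]

def triage(apis):
    """Flag the indicator combinations described above for each API."""
    findings = []
    for api in apis:
        if not api["documented"]:
            findings.append((api["name"], "shadow API"))
        if api["internet_exposed"] and api["handles_sensitive_data"]:
            findings.append((api["name"], "internet-exposed API handling sensitive data"))
        if not api["authenticated"]:
            findings.append((api["name"], "unauthenticated API"))
        if not api["authz_reviewed"]:
            findings.append((api["name"], "potential authorization flaw"))
        if not api["rate_limited"]:
            findings.append((api["name"], "improper rate limiting"))
    return findings

for name, issue in triage(inventory):
    print(f"{name}: {issue}")
```

The hard part in practice is keeping the inventory itself accurate, which is why discovery of shadow APIs comes first in the list.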



Quote for the day:

"The successful man doesn't use others. Other people use the successful man. For above all the success is of service" -- Mark Kainee

Daily Tech Digest - March 18, 2024

Generative AI will turn cybercriminals into better con artists. AI will help attackers to craft well-written, convincing phishing emails and websites in different languages, enabling them to widen the nets of their campaigns across locales. We expect to see the quality of social engineering attacks improve, making lures more difficult for targets and security teams to spot. As a result, we may see an increase in the risks and harms associated with social engineering – from fraud to network intrusions. ... AI is driving the democratisation of technology by helping less skilled users to carry out more complex tasks more efficiently. But while AI improves organisations’ defensive capabilities, it also has the potential for helping malicious actors carry out attacks against lower system layers, namely firmware and hardware, where attack efforts have been on the rise in recent years. Historically, such attacks required extensive technical expertise, but AI is beginning to show promise to lower these barriers. This could lead to more efforts to exploit systems at the lower level, giving attackers a foothold below the operating system and the industry’s best software security defences.


Get the Value Out of Your Data

A robust data strategy should have clearly defined outcomes and measurements in place to trace the value it delivers. However, it is important to acknowledge the need for flexibility during the strategic and operational phases. Consequently, defining deliverables becomes crucial to ensure transparency in the delivery process. To achieve this, adopting a data product approach focused on iteratively delivering value to your organization is recommended. The evolution of DevOps, supported by cloud platform technology, has significantly improved the software engineering delivery process by automating development and operational routines. Now, we are witnessing a similar agile evolution in the data management area with the emergence of DataOps. DataOps aims to enhance the speed and quality of data delivery, foster collaboration between IT and business teams, and reduce the associated time and costs. By providing a unified view of data across the organization, DataOps enables faster and more confident data-driven decision-making, ensuring data accuracy, up-to-datedness, and security. It automates and brings transparency to the measurements required for agile delivery through data product management.


Exposure to new workplace technologies linked to lower quality of life

Part of the problem is that IT workers need to stay updated with the newest tech trends and figure out how to use them at work, said Ryan Smith, founder of the tech firm QFunction, also unconnected with the study. The hard part is that new tech keeps coming in, and workers have to learn it, set it up, and help others use it quickly, he said. “With the rise of AI and machine learning and the uncertainty around it, being asked to come up to speed with it and how to best utilize it so quickly, all while having to support your other numerous IT tasks, is exhausting,” he added. “On top of this, the constant fear of layoffs in the job market forces IT workers to keep up with the latest technology trends in order to stay employable, which can negatively affect their quality of life.” ... “As IT has become the backbone of many businesses, that backbone is key to the business's operations, and in most cases revenue,” he added. “That means it's key to the business's survival. IT teams now must be accessible 24 hours a day. In the face of a problem, they are expected to work 24 hours a day to resolve it. ...”


6 best operating systems for Raspberry Pi 5

Even though it has been nearly seven years since Microsoft debuted Windows on Arm, there has been a noticeable lack of ARM-powered laptops. The situation is even worse for SBCs like the Raspberry Pi, which aren’t even on Microsoft’s radar. Luckily, the talented team at WoR project managed to find a way to install Windows 11 on Raspberry Pi boards. ... Finally, we have the Raspberry Pi OS, which has been developed specifically for the RPi boards. Since its debut in 2012, the Raspberry Pi OS (formerly Raspbian) has become the operating system of choice for many RPi board users. Since it was hand-crafted for the Raspberry Pi SBCs, it’s faster than Ubuntu and light years ahead of Windows 11 in terms of performance. Moreover, most projects tend to favor Raspberry Pi OS over the alternatives. So, it’s possible to run into compatibility and stability issues if you attempt to use any other operating system when attempting to replicate the projects created by the lively Raspberry Pi community. You won’t be disappointed with the Raspberry Pi OS if you prefer a more minimalist UI. That said, despite including pretty much everything you need to use to make the most of your RPi SBC, the Raspberry Pi OS isn't as user-friendly as Ubuntu.


Speaking without vocal cords, thanks to a new AI-assisted wearable device

The breakthrough is the latest in Chen's efforts to help those with disabilities. His team previously developed a wearable glove capable of translating American Sign Language into English speech in real time to help users of ASL communicate with those who don't know how to sign. The tiny new patch-like device is made up of two components. One, a self-powered sensing component, detects and converts signals generated by muscle movements into high-fidelity, analyzable electrical signals; these electrical signals are then translated into speech signals using a machine-learning algorithm. The other, an actuation component, turns those speech signals into the desired voice expression. The two components each contain two layers: a layer of biocompatible silicone compound polydimethylsiloxane, or PDMS, with elastic properties, and a magnetic induction layer made of copper induction coils. Sandwiched between the two components is a fifth layer containing PDMS mixed with micromagnets, which generates a magnetic field. Utilizing a soft magnetoelastic sensing mechanism developed by Chen's team in 2021, the device is capable of detecting changes in the magnetic field when it is altered as a result of mechanical forces—in this case, the movement of laryngeal muscles.


We can’t close the digital divide alone, says Cisco HR head as she discusses growth initiatives

At Cisco, we follow a strengths-based approach to learning and development, wherein our quarterly development discussions extend beyond performance evaluations to uplifting ourselves and our teams. We understand that a one-size-fits-all approach is inadequate. To best play to our employees' strengths, we have to be flexible, adaptable, and open to what works best for each individual and team. This enables us to understand individual employees' unique learning needs, enabling us to tailor personalised programs that encompass diverse learning options such as online courses, workshops, mentoring, and gamified experiences, catering to diverse learning styles. As a result, our employees are energized to pursue their passions, contributing their best selves to the workplace. Measuring the quality of work, internal movements, employee retention, patents, and innovation, along with engagement pulse assessments, allows us to gauge the effectiveness of our programs. When it comes to addressing the challenge of retaining talent, it's essential for HR leaders to consider a holistic approach. 


Vector databases: Shiny object syndrome and the case of a missing unicorn

What’s up with vector databases, anyway? They’re all about information retrieval, but let’s be real, that’s nothing new, even though it may feel like it with all the hype around it. We’ve got SQL databases, NoSQL databases, full-text search apps and vector libraries already tackling that job. Sure, vector databases offer semantic retrieval, which is great, but SQL databases like Singlestore and Postgres (with the pgvector extension) can handle semantic retrieval too, all while providing standard DB features like ACID. Full-text search applications like Apache Solr, Elasticsearch and OpenSearch also rock the vector search scene, along with search products like Coveo, and bring some serious text-processing capabilities for hybrid searching. But here’s the thing about vector databases: They’re kind of stuck in the middle. ... It wasn’t that early either — Weaviate, Vespa and Milvus were already around with their vector DB offerings, and Elasticsearch, OpenSearch and Solr were ready around the same time. When technology isn’t your differentiator, opt for hype. Pinecone’s $100 million Series B funding was led by Andreessen Horowitz, which in many ways is living by the playbook it created for the boom times in tech.


The Role of Quantum Computing in Data Science

Despite its potential, the transition to quantum computing presents several significant challenges to overcome. Quantum computers are highly sensitive to their environment, with qubit states easily disturbed by external influences – a problem known as quantum decoherence. This sensitivity requires that quantum computers be kept in highly controlled conditions, which can be expensive and technologically demanding. Moreover, concerns about the future cost implications of quantum computing on software and services are emerging. Ultimately, the prices will be sky-high, and we might be forced to search for AWS alternatives, especially if they raise their prices due to the introduction of quantum features, as is the case with Microsoft banking everything on AI. This raises the question of how quantum computing will alter the prices and features of both consumer and enterprise software and services, further highlighting the need for a careful balance between innovation and accessibility. There’s also a steep learning curve for data scientists to adapt to quantum computing.


AI-Driven API and Microservice Architecture Design for Cloud

Implementing AI-based continuous optimization for APIs and microservices in Azure involves using artificial intelligence to dynamically improve performance, efficiency, and user experience over time. Here's how you can achieve continuous optimization with AI in Azure: Performance monitoring: Implement AI-powered monitoring tools to continuously track key performance metrics such as response times, error rates, and resource utilization for APIs and microservices in real time. Automated tuning: Utilize machine learning algorithms to analyze performance data and automatically adjust configuration settings, such as resource allocation, caching strategies, or database queries, to optimize performance. Dynamic scaling: Leverage AI-driven scaling mechanisms to adjust the number of instances hosting APIs and microservices based on real-time demand and predicted workload trends, ensuring efficient resource allocation and responsiveness. Cost optimization: Use AI algorithms to analyze cost patterns and resource utilization data to identify opportunities for cost savings, such as optimizing resource allocation, implementing serverless architectures, or leveraging reserved instances.
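
As a minimal illustration of the dynamic scaling point, here is a toy scaling decision driven by predicted load and current CPU pressure. The thresholds, per-instance capacity, and metric source are assumptions, and a real deployment would act through Azure Monitor metrics and autoscale settings rather than a standalone function.

```python
import math

def desired_instances(current: int, cpu_utilization: float,
                      predicted_rps: float, rps_per_instance: float = 50.0,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """
    Toy scaling decision: size the pool for predicted demand, and add headroom
    when current CPU pressure is high. All thresholds are illustrative.
    """
    needed_for_prediction = math.ceil(predicted_rps / rps_per_instance)
    needed_for_pressure = current + 1 if cpu_utilization > 0.75 else 0
    target = max(needed_for_prediction, needed_for_pressure)
    return max(min_instances, min(max_instances, target))

# Predicted traffic spike: scale out ahead of demand.
print(desired_instances(current=3, cpu_utilization=0.60, predicted_rps=400))  # 8
# Quiet period: fall back toward the floor.
print(desired_instances(current=8, cpu_utilization=0.20, predicted_rps=60))   # 2
```

The value of the predictive input is that the pool grows before the traffic arrives, instead of reacting only after latency and error rates have already degraded.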


4 ways AI is contributing to bias in the workplace

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AIs are only as good as the data they're trained on. GPT-3.5 was trained on massive amounts of widely available information online, including books, articles, and social media. Access to this online data will inevitably reflect societal inequities and historical biases, as shown in the training data, which the AI bot inherits and replicates to some degree. No one using AI should assume these tools are inherently objective because they're trained on large amounts of data from different sources. While generative AI bots can be useful, we should not underestimate the risk of bias in an automated hiring process -- and that reality is crucial for recruiters, HR professionals, and managers. Another study found racial bias is present in facial-recognition technologies that show lower accuracy rates for dark-skinned individuals. Something as simple as data for demographic distributions in ZIP codes being used to train AI models, for example, can result in decisions that disproportionately affect people from certain racial backgrounds.



Quote for the day:

"The most common way people give up their power is by thinking they don't have any." -- Alice Walker

Daily Tech Digest - December 29, 2023

5 Ways That AI Is Set To Transform Cybersecurity

Cybersecurity has long been notoriously siloed, with organizations installing many different tools and products, often poorly interconnected. No matter how hard vendors and organizations work to integrate tools, coalescing all relevant cybersecurity information into one place remains a big challenge. But AI offers a way to combine multiple data sets from many disparate sources and provide a truly unified view of an organization’s security posture, with actionable insights. And with generative AI, gaining those insights is so easy, a matter of simply asking the system questions such as “What are the top three things I could do today to reduce risk?” or “What would be the best way to respond to this incident report?” AI has the potential to consolidate security feeds in a way the industry has never been able to quite figure out. Generative AI will blow up the very nature of data infrastructure. Think about it: All the different tools that organizations use to store and manage data are built for humans. Essentially, they’re designed to segment information and put it in various electronic boxes for people to retrieve later. It’s a model based on how the human mind works.


Microservices Resilient Testing Framework

Resilience in microservices refers to the system's ability to handle and recover from failures, continue operating under adverse conditions, and maintain functionality despite challenges like network latency, high traffic, or the failure of individual service components. Microservices architectures are distributed by nature, often involving multiple, loosely coupled services that communicate over a network. This distribution often increases the system's exposure to potential points of failure, making resilience a critical factor. A resilient microservices system can gracefully handle partial failures, prevent them from cascading through the system, and ensure overall system stability and reliability. For resilience, it is important to think in terms of positive and negative testing scenarios. The right combination of positive and negative testing plays a crucial role in achieving this resilience, allowing teams to anticipate and prepare for a range of scenarios and maintaining a robust, stable, and trustworthy system. For this reason, the rest of the article will be focusing on negative and positive scenarios for all our testing activities.
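
To make the positive/negative pairing concrete, here is a pytest-style sketch against a hypothetical client that degrades gracefully on timeout; the client, its fallback behaviour, and the exception type are invented for illustration.

```python
class PaymentServiceClient:
    """Hypothetical client wrapping a downstream call with a fallback."""
    def __init__(self, transport):
        self.transport = transport

    def charge(self, amount: float) -> dict:
        try:
            return self.transport(amount)                  # remote call
        except TimeoutError:
            return {"status": "queued", "amount": amount}  # graceful degradation

def test_charge_succeeds_when_downstream_is_healthy():
    # Positive scenario: the happy path works end to end.
    client = PaymentServiceClient(lambda amount: {"status": "ok", "amount": amount})
    assert client.charge(10.0)["status"] == "ok"

def test_charge_degrades_gracefully_when_downstream_times_out():
    # Negative scenario: a simulated timeout must not cascade to the caller.
    def flaky(amount):
        raise TimeoutError("simulated network latency")
    client = PaymentServiceClient(flaky)
    assert client.charge(10.0)["status"] == "queued"
```

The negative test is the one that exercises the resilience mechanism itself; without it, the fallback path ships untested until the first real outage.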


Skynet Ahoy? What to Expect for Next-Gen AI Security Risks

From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems. One way has been to use the models to create sophisticated business email compromise (BEC) and other phishing attacks, which require the creation of socially engineered, personalized messages designed for success. "With malware, ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says. AI hallucinations also pose a significant security threat and allow malicious actors to arm LLM-based technology like ChatGPT in a unique way. An AI hallucination is a plausible response by the AI that's insufficient, biased, or flat-out not true. "Fictional or other unwanted responses can steer organizations into faulty decision-making, processes, and misleading communications," warns Avivah Litan, a Gartner vice president. Threat actors also can use these hallucinations to poison LLMs and "generate specific misinformation in response to a question," observes Michael Rinehart, vice president of AI at data security provider Securiti. 


Cybersecurity teams need new skills even as they struggle to manage legacy systems

To stay ahead, though, security leaders should incorporate prompt engineering training for their team, so they can better understand how generative AI prompts function, the analyst said. She also underscored the need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models. They need to develop offensive AI security skills to ensure models are not tainted or stolen by cybercriminals seeking intellectual property. They also have to ensure sensitive data used to train these models are not exposed or leaked, she said. In addition to the ability to write more convincing phishing email, generative AI tools can be manipulated to write malware despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's Asia-Pacific cybersecurity consulting leader. He noted that researchers, including himself, have been able to circumvent ethical restrictions that guide platforms such as ChatGPT and prompt them to write malware.


The relationship between cloud FinOps and security

Established FinOps and cybersecurity teams should annually evaluate their working relationship as part of continuous improvement. This collaboration helps ensure that, as practices and tools evolve, the correct FinOps data is available to cybersecurity teams as part of their monitoring, incident response and post-incident forensics. The FinOps Foundation doesn't mention cybersecurity in its FinOps Maturity Model. But, in all rights, FinOps and cybersecurity collaboration indicates a maturing organization in the model's Run phase. Ideally, moves to establish such collaboration should show themselves in the Walk stage. ... Building a relationship between the FinOps and cybersecurity teams should start early when an organization chooses a FinOps tool. A FinOps team can better forecast expenses, plan budget allocation and avoid unnecessary costs by understanding security requirements and constraints. These forecasts result in a more cost-effective and financially efficient cloud operation, so plan for some level of cross-training between the teams.


What is GRC? The rising importance of governance, risk, and compliance

Like other parts of enterprise operations, GRC comprises a mix of people, process, and technology. To implement an effective GRC program, enterprise leaders must first understand their business, its mission, and its objectives, according to Ameet Jugnauth, the ISACA London Chapter board vice president and a member of the ISACA Emerging Trends Working Group. Executives then must identify the legal and regulatory requirements the organization must meet and establish the organization’s risk profile based on the environment in which it operates, he says. “Understand the business, your business environment (internal and external), your risk appetite, and what the government wants you to achieve. That all sets your GRC,” he adds. The roles that lead these activities vary from one organization to the next. Midsize to large organizations typically have C-level executives — namely a chief governance officer, chief risk officer, and chief compliance officer — to oversee these tasks, McKee says. These executives lead risk or compliance departments with dedicated teams.


Revolutionising Fraud Detection: The Role of AI in Safeguarding Financial Systems

Conventional fraud detection methods, primarily rule-based systems and human analysis, have proven increasingly inadequate in the face of evolving fraud tactics. Rule-based systems, while effective at identifying simple patterns, often struggle to adapt to the ever-changing landscape of fraud: fraudsters are strongly motivated and evolve faster than the rules in any rules engine. ... The same volumes of data that overwhelm traditional fraud detection systems are fuel for AI. With its ability to learn from vast amounts of data and identify complex patterns, AI is poised to revolutionize the fight against fraud. ... While AI offers immense potential, it’s crucial to acknowledge the challenges associated with its adoption. Data privacy concerns, ethical considerations around algorithmic bias, and the need for robust security measures all demand careful attention. As AI opens new frontiers in fraud prevention, unregulated AI technology such as deepfakes in the wrong hands could also enable sophisticated impersonation scams. However, the benefits of AI far outweigh the challenges. 
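
To make the pattern-learning idea concrete, here is a small sketch of one common approach, unsupervised anomaly detection on transaction features, using scikit-learn and synthetic data. It is an illustration of the general technique, not the specific models the article has in mind; the feature choices and thresholds are invented.

```python
# Sketch: an unsupervised anomaly detector over synthetic transaction
# features (amount, hour of day, transactions in the last 24h).
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(60, 20, 1000),   # typical purchase amounts
    rng.normal(14, 4, 1000),    # mostly daytime activity
    rng.poisson(3, 1000),       # a few transactions per day
])
suspicious = np.array([[4800, 3, 40]])  # large amount, 3am, burst of activity

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks an outlier under scikit-learn's convention
```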


API security in 2024: Predictions and trends

The rapid rate of change in APIs means organizations will always have vulnerabilities that need to be remediated. As a result, 2024 will usher in a new era where visibility is a priority for API security strategies. Preventing attackers from entering the perimeter is never a foolproof strategy, whereas real-time visibility into the security environment enables security teams to respond rapidly and neutralize threats before they disrupt operations or extract valuable data. ... With the widespread use of APIs, especially in sectors such as financial services, regulators are looking to encourage transparency in APIs. This means data privacy concerns and regulations will continue to shape API use in 2024. In response, organizations are becoming wary of having third parties hold and access their data to conduct security analyses. We expect to see a shift in 2024 where organizations demand security solutions that run locally within their own environments. Self-managed solutions (either on-premises or private cloud) eliminate the need to filter, redact, and anonymize data before it’s stored.
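
As a toy illustration of the self-managed visibility idea, the sketch below scans API access records kept entirely inside the organization's own environment and surfaces clients whose error rate suggests probing or abuse. The records, thresholds, and field names are invented; a real deployment would read from the organization's own gateway or log pipeline.

```python
# Sketch of self-managed API visibility: flag clients with an unusually
# high error rate from locally held access records (invented examples).
from collections import defaultdict

access_log = [
    {"client": "svc-a", "status": 200}, {"client": "svc-a", "status": 200},
    {"client": "scanner", "status": 401}, {"client": "scanner", "status": 403},
    {"client": "scanner", "status": 404}, {"client": "scanner", "status": 401},
]

def flag_noisy_clients(records, error_ratio=0.5, min_requests=3):
    counts = defaultdict(lambda: [0, 0])          # client -> [errors, total]
    for rec in records:
        counts[rec["client"]][1] += 1
        if rec["status"] >= 400:
            counts[rec["client"]][0] += 1
    return [c for c, (err, total) in counts.items()
            if total >= min_requests and err / total >= error_ratio]

print(flag_noisy_clients(access_log))  # ['scanner']
```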


The Terrapin Attack: A New Threat to SSH Integrity

Microsoft’s logic is that the impact on Win32-OpenSSH is limited. This is a major mistake. Microsoft’s decision allows unknown server-side implementation bugs to remain exploitable in a Terrapin-like attack, even if the server has been patched to support “strict kex.” As one Windows user noted, “This puts Microsoft customers at risk of avoidable Terrapin-style attacks targeting implementation flaws of the server.” Exactly so. For this protection to be effective, both client and server must be patched; if either is vulnerable, the entire connection can still be attacked. So to be safe, you must patch and update both your client and server SSH software. If you’re on Windows and haven’t manually updated your workstations, their connections remain open to attack. While patches and updates are being released, the widespread nature of this vulnerability means it will take time for all clients and servers to be updated. Because an MITM attacker must already be in place for you to be vulnerable, I wouldn’t spend the holiday season worrying myself sick. I mean, you’re sure you don’t already have a hacker inside your system, right? Right!?
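
One practical way to start an inventory is to read the SSH identification banner each server advertises (per RFC 4253) and compare it against a release that carries the "strict kex" countermeasure (OpenSSH added it in 9.6). The sketch below does exactly that; the host list is a placeholder, and a banner check is best-effort only: it neither proves a server is vulnerable nor that it is safe, since both ends and non-OpenSSH implementations matter too.

```python
# Sketch: read SSH identification banners to spot servers still advertising
# a pre-strict-kex OpenSSH release. Hosts are placeholders; best-effort only.
import re
import socket

HOSTS = ["bastion.example.com", "git.example.com"]  # placeholders

def ssh_banner(host, port=22, timeout=3):
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode(errors="replace").strip()

for host in HOSTS:
    try:
        banner = ssh_banner(host)
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
        continue
    match = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if match and (int(match.group(1)), int(match.group(2))) < (9, 6):
        print(f"{host}: {banner} - review for Terrapin mitigations")
    else:
        print(f"{host}: {banner}")
```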


Supporting Privacy, Security and Digital Trust Through Effective Enterprise Data Management Programs

Professionals responsible for supporting privacy efforts should therefore prioritize effective enterprise data management, because it is integral to safeguarding individuals’ privacy. A well-structured data management framework helps ensure that personal information is handled ethically and in compliance with regulations, while fostering a culture of responsible data stewardship within organizations. Done right, this reinforces trust with stakeholders, serves as a differentiator in the marketplace, improves visibility into data ecosystems, increases the reliability of data, and supports scalability and innovative go-to-market efforts. ... Most, if not all, global data privacy laws and regulations require data to be managed effectively. To comply, organizations must first understand the data they collect, the purposes for its collection, how it is used, how it is shared, how it is stored, how it is destroyed, and so on. Only after organizations have a full understanding of their data ecosystem can they begin to implement effective controls that both protect data and preserve its ability to achieve intended operational goals.
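
A minimal sketch of what answering those questions can look like as a data-inventory record is shown below. The field names and sample entry are illustrative, not a standard schema or anything prescribed by a particular regulation.

```python
# Minimal sketch of a data-inventory record covering the questions above:
# what is collected, why, how it is shared, stored, and destroyed.
from dataclasses import dataclass, field

@dataclass
class DataInventoryRecord:
    element: str                 # e.g. "customer email address"
    purpose: str                 # why it is collected
    shared_with: list = field(default_factory=list)
    storage_location: str = ""
    retention_days: int = 0      # destroy after this many days

record = DataInventoryRecord(
    element="customer email address",
    purpose="transaction receipts and service notices",
    shared_with=["email delivery provider"],
    storage_location="EU region, encrypted at rest",
    retention_days=730,
)
print(record)
```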



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - December 09, 2023

AI in Biotechnology: The Big Interview with Dr Fred Jordan, Co-Founder of FinalSpark

Of course, the ethical consideration is increased because we are using human cells. From an ethical perspective, what is interesting is that all this wouldn’t be possible without the iPSCs. Ethically, we don’t need to take the brain of a real human being to conduct experiments. ... The ultimate goal is to develop machines with a form of intelligence. We want to create a real function, something useful. Imagine inputting a picture to the organoid, and it responds, recognizing objects like cats or dogs. Right now, we are focusing on one specific function – the significant reduction in energy consumption, potentially millions to billions of times less than digital computers. As a result, one practical application could be cloud computing, where these neuron-based systems consume significantly less energy. This offers an eco-friendly alternative to traditional computing processing. Ultimately, the future of AI in biotechnology holds huge potential for various applications because it’s a completely new way of looking at neurons. It’s like the inventors of the transistor not knowing about the internet.


AI regulatory landscape and the need for board governance

“We all need to have a plan in place, and we need to be thinking about how are you using it and whether it is safe.” She underscored the urgency, noting that journalists are investigating where AI has gone wrong and where it’s discriminating against people. Additionally, there are lawyers who seize on potential litigation opportunities against ill-prepared, deep-pocketed organizations. "Good AI hygiene is non-negotiable today, and you must have good oversight and best practices in place," she asserted. Despite a lack of comprehensive Congressional AI legislation, Vogel clarified that AI is not without oversight. Four federal agencies recently committed to ensuring fairness in emerging AI systems. In a recent statement, agency leaders committed to using their enforcement powers if AI perpetuates unlawful bias or discrimination. AI regulatory bills have been proposed by over 30 state legislatures, and the international community is also ramping up efforts. Vogel cited the European Union's AI Act as the AI equivalent of GDPR, the regulation that established strict data privacy rules affecting companies worldwide.


Data Management, Distribution, and Processing for the Next Generation of Networks

Investments in cloud architectures by CSPs span their own resources – but they also extend to third parties; federated cloud architectures are the result. These interconnected cloud assets allow CSPs to extend their reach, share resources and collaborate with other stakeholders to secure desired outcomes. Why do we combine this with edge computing? Because resources at the edge may not be in the CSP’s own domain. Edge systems may be a combination of CSP-owned and other resources that are used in parallel to deliver a particular service. And, regardless of the overall pace towards 5G SA, edge computing is now firmly in demand by enterprises (and CSPs), to support a new generation of high-performance and low-latency services. This demand won’t only be served by CSPs, however. Many enterprises are seeking to deploy private networks – and the resources required to support their applications may be accessed via federated clouds. Such an enterprise may not need its own UPF, but it may benefit from one offered by another provider in an adjacent edge location, or delivered by a systems integrator that runs multiple private networks with shared resources, available on demand.


Understanding Each Link of the Cyberattack Impact Chain

There are two ways to assess the cyberattack impact chain: Causes and effects. To build stakeholder support for CSAT, CISOs have to show the board how much damage cyberattacks are capable of causing. Beyond the fact that the average cost of a data breach reached an all-time high of $4.45 million in 2023, there are many other repercussions: Disrupted services and operations, a loss of customer trust and a heightened risk of future attacks. CSAT content must inform employees about the effects of cyberattacks to help them understand the risks companies face. It’s even more important for company leaders and employees to have a firm grasp on the causes of cyberattacks. Cybercriminals are experts at exploiting employees’ psychological vulnerabilities – particularly fear, obedience, craving, opportunity, sociableness, urgency and curiosity – to steal money and credentials, break into secure systems and launch cyberattacks. Consider the MGM attack, which relied on vishing – one of the most effective social engineering tactics, as it allows cybercriminals to impersonate trusted entities to deceive their victims.


Another Cyberattack on Critical Infrastructure and the Outlook on Cyberwarfare

Critical infrastructure attacks, like the one against the water authority in Pennsylvania, have occurred in the wake of the Israel-Hamas war. And geopolitical tension and turmoil expands beyond this conflict. Russia’s invasion of Ukraine has sparked cyberattacks. Chinese cyberattacks against government and industry in Taiwan have increased. “This is just going to be an ongoing part of operating digital systems and operating with the internet,” Dominique Shelton Leipzig, a partner and member of the cybersecurity and data privacy practice at global law firm Mayer Brown, tells InformationWeek. While kinetic weapons are still very much a part of war, cyberattacks are another tool in the arsenal. Successful cyberattacks against critical infrastructure have the potential for widespread devastation. “The landscape of warfare is changing,” says Warner. And the weaponization of artificial intelligence is likely to increase the scale of cyberwarfare. “We have the normal technology that we use for denial-of-service attacks, but imagine being able to do all of that on an even greater scale,” says Shelton Leipzig.


Continuous Testing in the Era of Microservices and Serverless Architectures

Continuous testing is a practice that emphasizes the need for testing at every stage of the software development lifecycle. From unit tests to integration tests and beyond, this approach aims to detect and rectify defects as early as possible, ensuring a high level of software quality. It extends beyond mere bug detection to encompass a holistic approach: unit tests scrutinize individual components, while integration tests evaluate the collaboration between diverse modules. The practice not only minimizes defects but also strengthens the robustness of the entire system. ... Decomposed testing strategies are key to effective microservices testing. This approach advocates examining each microservice in isolation: individual services are rigorously tested to ensure their functionality meets specifications, followed by comprehensive integration testing. This methodical approach not only identifies defects at an early stage but also helps guarantee seamless communication between services, aligning with the modular nature of microservices.
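
A compact illustration of the decomposed strategy appears below: one test exercises a single service's logic in isolation, while an integration-style test checks the contract with a collaborating service, stubbed out here. The service names and functions are invented for the example; run it with pytest.

```python
# Illustration of decomposed microservices testing. Names are invented.
def apply_discount(total, loyalty_tier):
    """Pricing logic owned by a hypothetical 'orders' microservice."""
    rates = {"gold": 0.10, "silver": 0.05}
    return round(total * (1 - rates.get(loyalty_tier, 0.0)), 2)

class StubCustomerService:
    """Stands in for the real 'customers' microservice called over HTTP."""
    def loyalty_tier(self, customer_id):
        return "gold" if customer_id == "c-1" else "standard"

def test_unit_discount_for_gold_tier():
    # Unit test: the service's own logic, in isolation.
    assert apply_discount(100.0, "gold") == 90.0

def test_integration_order_uses_customer_tier():
    # Integration-style test: the contract between the two services.
    customers = StubCustomerService()
    tier = customers.loyalty_tier("c-1")
    assert apply_discount(200.0, tier) == 180.0
```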


Understanding Master Data Management’s integration challenges

Data integration within MDM is a complex task that should not be underestimated. Many organizations have a myriad of source systems, each with its own data structure and format. These systems can range from commercial CRM or ERP systems to custom-built legacy software, all of which may use different data models, definitions, and standards. In addition, organizations often want real-time or near-real-time synchronization between the MDM system and the source systems: any change in a source system needs to be reflected in the MDM system immediately to ensure data accuracy and consistency. Using a native connector from the MDM system to read data from your operational systems can provide several benefits, such as ease of integration. However, the choice between a native connector and a custom-built one depends mostly on your specific needs, the complexity of your data, the systems you’re integrating, and the capabilities of your MDM system.
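
To make the custom-connector path concrete, here is a rough sketch of a near-real-time sync loop: poll a source system for records changed since the last run, map them onto a canonical model, and upsert them into the MDM hub. The functions `fetch_changed_customers` and `mdm_upsert` are hypothetical stand-ins for whatever APIs the actual source system and MDM platform expose.

```python
# Sketch of a custom MDM connector. The source and MDM calls are placeholders.
from datetime import datetime, timedelta, timezone

def fetch_changed_customers(since):
    # Placeholder: in practice, query the CRM/ERP change log since `since`.
    return [{"source_id": "CRM-42", "name": "Acme GmbH", "country": "DE",
             "updated_at": datetime.now(timezone.utc)}]

def to_canonical(record):
    # Map source-specific fields onto the MDM hub's canonical model.
    return {"externalId": record["source_id"],
            "legalName": record["name"].strip().upper(),
            "countryCode": record["country"]}

def mdm_upsert(golden_record):
    print("Upserting into MDM:", golden_record)  # placeholder for the real API call

def sync(last_run):
    for rec in fetch_changed_customers(since=last_run):
        mdm_upsert(to_canonical(rec))
    return datetime.now(timezone.utc)

sync(datetime.now(timezone.utc) - timedelta(minutes=5))
```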


Aim for a modern data security approach

Beginning with data observability, a “shift left” implementation requires that data security become the linchpin before any application is put into production. Instead of being confined to data quality or data reliability, security needs to become another use case application of the underlying data and be unified into the rest of the data observability subsystem. By doing this, data security benefits from the alerts and notifications stemming from data observability offerings. Data governance platform capabilities typically include business glossaries, catalogs, and data lineage. They also leverage metadata to accelerate and govern analytics. In “shift left” data governance, the same metadata is augmented by data security policies and user access rights to further increase trust and allow appropriate users to access data. Leveraging and establishing comprehensive data observability and governance is the key to data democratization. As a result, these proactive and transparent views over the security of critical data elements will also accelerate application development and improve productivity.
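
A minimal sketch of that "shift left" idea follows: catalog metadata carries both a sensitivity label and an access policy, and the same metadata gates reads. The catalog entries, roles, and column names are invented for illustration; real platforms express these policies in their own catalog and policy engines.

```python
# Sketch: catalog metadata augmented with security policy, used to gate access.
CATALOG = {
    "customers.email": {"sensitivity": "PII", "allowed_roles": {"support", "dpo"}},
    "orders.total":    {"sensitivity": "internal", "allowed_roles": {"analyst", "support"}},
}

def can_read(column, role):
    entry = CATALOG.get(column)
    return bool(entry) and role in entry["allowed_roles"]

def read_column(column, role):
    if not can_read(column, role):
        raise PermissionError(f"{role} may not read {column} "
                              f"({CATALOG.get(column, {}).get('sensitivity')})")
    return f"...values of {column}..."

print(read_column("orders.total", "analyst"))
print(can_read("customers.email", "analyst"))  # False: PII blocked for analysts
```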


Google expands minimum security guidelines for third-party vendors

"The expanded guidance around external vulnerability protection aims to provide more consistent legal protection and process to bug hunters that want to protect themselves from being prosecuted or sued for reporting findings," says Forester Principal Analyst Sandy Carielli. "It also helps set expectations about how companies will work with researchers. Overall, the expanded guidance will help build trust between companies and security researchers." The enhanced guidance encourages more comprehensive and responsible vulnerability disclosures, says Jan Miller, CTO of threat analysis at OPSWAT, a threat prevention and data security company. "That contributes to a more secure digital ecosystem, which is especially crucial in critical infrastructure sectors where vulnerabilities can have significant repercussions," he says. ... The enhanced guidance encourages more comprehensive and responsible vulnerability disclosures, says Jan Miller, CTO of threat analysis at OPSWAT, a threat prevention and data security company. 


Europe Reaches Deal on AI Act, Marking a Regulatory First

"Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter," said Thierry Breton, the European commissioner for internal market, who had a key role in negotiations. The penalties for noncompliance with the rules can lead to fines of up to 7% of global revenue, depending on the violation and size of the company. What the final regulation ultimately requires of AI companies will be felt globally, a phenomenon known as the Brussels effect since the European Union often succeeds in approving cutting-edge regulations before other jurisdictions. The United States is nowhere near approving a comprehensive AI regulation, leaving the Biden administration to rely on executive orders, voluntary commitments and existing authorities to combat issues such as bias, deep fakes, privacy and security. European officials had no difficulty in agreeing that the regulation should ban certain AI applications such as social scoring or that regulations should take a tiered-based approach that treats high-risk systems, such as those that could influence the outcome of an election, with greater requirements for transparency and disclosure.



Quote for the day:

''It is never too late to be what you might have been." -- George Eliot