Daily Tech Digest - March 31, 2024

Experts Concerned by Signs of AI Bubble

"Capital continues to pour into the AI sector with very little attention being paid to company fundamentals," he wrote, "in a sure sign that when the music stops there will not be many chairs available." It's been a turbulent week for AI companies, highlighting what sometimes seems like unending investor appetite for new AI ventures. Case in point is Cohere, one of the many startups focusing on generative AI, which is reportedly in late-stage discussions that would value the venture at a whopping $5 billion. Then there's Microsoft, which has already made a $13 billion bet on OpenAI, as well as hiring most of the staff from AI startup Inflection AI earlier this month. The highly unusual deal — or "non-acquisition" — raised red flags among investors, leading to questions as to why Microsoft didn't simply buy the company. According to Windsor, companies "are rushing into anything that can be remotely associated with AI." Ominously, the analyst wasn't afraid to draw direct lines between the ongoing AI hype and previous failed hype cycles. "This is precisely what happened with the Internet in 1999, autonomous driving in 2017 and now generative AI in 2024," he wrote.


A New Way to Let AI Chatbots Converse All Day Without Crashing?

Typically, an AI chatbot writes new text based on text it has just seen, so it stores recent tokens in memory, called a KV Cache, to use later. The attention mechanism builds a grid that includes all tokens in the cache, an “attention map” that maps out how strongly each token, or word, relates to each other token. Understanding these relationships is one feature that enables large language models to generate human-like text. But when the cache gets very large, the attention map can become even more massive, which slows down computation. Also, if encoding content requires more tokens than the cache can hold, the model’s performance drops. For instance, one popular model can store 4,096 tokens, yet there are about 10,000 tokens in an academic paper. To get around these problems, researchers employ a “sliding cache” that bumps out the oldest tokens to add new tokens. However, the model’s performance often plummets as soon as that first token is evicted, rapidly reducing the quality of newly generated words. In this new paper, researchers realized that if they keep the first token in the sliding cache, the model will maintain its performance even when the cache size is exceeded.
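The fix described — keeping the first token in place while the rest of the window slides — can be sketched in a few lines. In the sketch below, the class and variable names are illustrative, and tokens are stored as plain strings; a real KV cache holds key/value tensors per layer rather than words.

```python
class SinkKVCache:
    """Sliding KV cache that always retains the first ("attention sink") tokens.

    Illustrative sketch only: a production cache stores key/value tensors,
    not token strings.
    """
    def __init__(self, max_size, num_sink=1):
        self.max_size = max_size
        self.num_sink = num_sink   # how many initial tokens are never evicted
        self.tokens = []

    def add(self, token):
        self.tokens.append(token)
        if len(self.tokens) > self.max_size:
            # Evict the oldest *non-sink* token instead of the very first one.
            del self.tokens[self.num_sink]

cache = SinkKVCache(max_size=4)
for t in ["<s>", "the", "cat", "sat", "on", "the", "mat"]:
    cache.add(t)
print(cache.tokens)  # ['<s>', 'on', 'the', 'mat']
```

A plain sliding cache would have evicted `<s>` first; here it survives no matter how far the window slides, which is the property the researchers found preserves model quality.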


AI and compliance: The future of adverse media screening in FinTech

Unlike static databases, AI systems continuously learn and adapt from the data they process. This means that they become increasingly effective over time, able to discern patterns and flag potential risks with greater accuracy. This evolving intelligence is crucial for keeping pace with the sophisticated techniques employed by individuals or entities trying to circumvent financial regulations. Furthermore, the implementation of AI in adverse media screening fosters a more robust compliance framework. It empowers FinTech companies to preemptively address potential regulatory challenges by providing them with actionable insights. This proactive approach to compliance not only safeguards the institution but also ensures the integrity of the financial system at large. Despite the promising benefits, the integration of AI into adverse media screening is not without challenges. Concerns regarding data privacy, the potential for bias in AI algorithms, and the need for transparent methodologies are at the forefront of discussions among regulators and companies alike. 


How to Get Tech-Debt on the Roadmap

Addressing technical debt is costly, emphasizing the need to view engineering efforts beyond the individual contributor level. Companies often use the revenue per employee metric as a gauge of the value each employee contributes towards the company's success. Quick calculations suggest that engineers, who typically constitute about 30-35% of a company's workforce, are expected to generate approximately one million dollars in revenue each through their efforts. ... Identifying technical debt is complex. It encompasses customer-facing features, such as new functionalities and bug fixes, as well as behind-the-scenes work like toolchains, testing, and compliance, which become apparent only when issues arise. Additionally, operational aspects like CI/CD processes, training, and incident response are crucial, non-code components of system management. ... Service Level Objectives (SLOs) stand out as a preferred tool for connecting technical metrics with business value, primarily because they encapsulate user experience metrics, offering a concrete way to demonstrate the impact of technical decisions on business outcomes.
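Because SLOs turn availability targets into concrete numbers, even a back-of-the-envelope calculation can anchor a tech-debt conversation in business terms. A minimal sketch (the function name and the 30-day window are assumptions for illustration):

```python
def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Minutes of allowed downtime per `days`-day window for an availability SLO.

    E.g. a 99.9% SLO leaves roughly 43 minutes of error budget per month,
    which is the kind of figure that makes reliability work legible to the
    business side.
    """
    return days * 24 * 60 * (1 - slo)

print(error_budget_minutes(0.999))   # ~43.2 minutes per 30-day month
```

If paying down a piece of technical debt demonstrably protects that budget, the investment case largely writes itself.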


What are the Essential Skills for Cyber Security Professionals in 2024?

It perhaps goes without saying, but technical proficiency is key. It is essential to understand how networks function, and to have the ability to secure them. This should include knowledge of firewalls, intrusion detection and prevention systems, VPNs and more. Coding and scripting are also crucial, with proficiency in languages such as Python, Java, or C++ being invaluable for cybersecurity professionals. ... Coding skills can enable professionals to analyse, identify, and fix vulnerabilities in software and systems, essential when carrying out effective audits of security practices. They will also be needed when evaluating new technology being integrated into the business, to implement controls that diminish any risk in its operation. ... There’s no escaping the fact that data in general is the lifeblood of modern business. As a result, every cyber security professional will benefit from learning data analysis skills. This does not mean becoming a data scientist, but upskilling in areas such as statistics can have a profound impact on your job. At the very least you need to be able to understand what the data is telling you. Otherwise, you’re simply following what other people – usually in the data team – tell you.


Removing the hefty price tag: cloud storage without the climate cost

Firstly, businesses should consider location. This means picking a cloud storage provider that’s close to a power facility. This is because distance matters. If electricity travels a long way between generation and use, a percentage is lost. In addition, data centers located in underwater environments or cooler climates can reduce the energy required for cooling. Next, businesses should ask green providers about what they’re doing to minimize their environmental impact. For example, powering their operations with solar, wind, or biofuels reduces reliance on fossil fuels and so lowers GHG emissions. Some facilities will house large battery banks to store renewable energy and ensure a continuous, eco-friendly power supply. Last but certainly not least, technology offers a powerful avenue for enhancing the energy efficiency of cloud storage. Some providers have been investing in algorithms, software, and hardware designed to optimize energy use. For instance, introducing AI and machine learning algorithms or frequency scaling can drastically improve how data centers manage power consumption and cooling.


RESTful APIs: Essential Concepts for Developers

One of these principles is called statelessness, which emphasizes that each request from a client to the server must contain all the information necessary to understand and process the request. This includes the endpoint URI, the HTTP method, any headers, query parameters, authentication tokens, and session information. By eliminating the need for the server to store session state between requests, statelessness enhances scalability and reliability, making RESTful APIs ideal for distributed systems. Another core principle of REST architecture is the uniform interface, promoting simplicity and consistency in API design. This includes several key concepts, such as resource identification through URIs (endpoints), the use of standard HTTP methods like GET, POST, PUT, and DELETE for CRUD operations, and the adoption of standard media types for representing resources. ... RESTful API integration allows developers to leverage the functionalities of external APIs to enhance the capabilities of their applications. This includes accessing external data sources, performing specific operations, or integrating with third-party services.
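The uniform interface can be made concrete with a toy in-memory handler that maps the standard HTTP methods to CRUD operations on a resource identified by URI. The "books" resource, routing, and return shapes below are illustrative, not taken from any particular framework:

```python
# Toy REST-style dispatcher: (method, URI) -> CRUD on an in-memory collection.
BOOKS = {}
NEXT_ID = 1

def handle(method, path, body=None):
    global NEXT_ID
    parts = [p for p in path.split("/") if p]      # "/books/1" -> ["books", "1"]
    if not parts or parts[0] != "books":
        return 404, None
    if method == "POST" and len(parts) == 1:       # Create
        book_id = str(NEXT_ID)
        NEXT_ID += 1
        BOOKS[book_id] = body
        return 201, {"id": book_id, **body}
    if len(parts) == 2 and parts[1] in BOOKS:
        book_id = parts[1]
        if method == "GET":                        # Read
            return 200, BOOKS[book_id]
        if method == "PUT":                        # Update (full replacement)
            BOOKS[book_id] = body
            return 200, body
        if method == "DELETE":                     # Delete
            del BOOKS[book_id]
            return 204, None
    return 404, None
```

Statelessness shows up here too: every call carries the full method, URI, and body, so the handler keeps no per-client session between requests.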


The dawn of eco-friendly systems development

First, although we are good at writing software, we struggle to create software that optimally utilizes hardware resources. This leads to inefficiencies in power consumption. In the era of cloud computing, we view hardware resources as a massive pool of computing goodness readily available for the taking. No need to think about efficiency or optimization. Second, there has been no accountability for power consumption. Developers and devops engineers do not have access to metrics demonstrating poorly engineered software’s impact on hardware power consumption. In the cloud, this lack of insight is usually worse. Hardware inefficiency costs are hidden since the hardware does not need to be physically purchased—it’s on demand. Cloud finops may change this situation, but it hasn’t yet. Finally, we don’t train developers to write efficient code. The difference in power consumption between a nonoptimized and a highly optimized application can be as much as 500%. I have watched this situation deteriorate over time. We had to write efficient code back in the day because the cost and availability of processors, storage, and memory were prohibitive and limited.


10 Essential Measures To Secure Access Credentials

Eliminating passwords should be every organization’s goal, but if that’s not a near-term possibility, implement advanced password policies that exceed traditional standards. Encourage the use of passphrases with a minimum length of 16 characters, incorporating complexity through variations in character types, as well as password changes every 90 days. Additionally, advocate for the use of password managers and passkeys, where possible, and educate end users on the dangers of reusing passwords across different services. ... Ensure the encryption of credentials using advanced cryptographic algorithms and manage encryption keys securely with hardware security modules or cloud-based key management services. Implement zero-trust architectures to safeguard encrypted data further. ... Implement least privilege access controls, which give users only the minimum level of access needed to do their jobs, and regularly audit permissions. Use just-in-time (JIT) and just-enough-access (JEA) principles to dynamically grant access as needed, reducing the attack surface by limiting excessive permissions.
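The passphrase guidance above can be sketched as a simple policy check. The "at least 3 of 4 character classes" threshold below is an illustrative assumption standing in for "complexity through variations in character types", not a cited standard:

```python
import re

def meets_policy(passphrase: str) -> bool:
    """Check a passphrase against the sketched policy: >= 16 characters and
    at least 3 of 4 character classes (lower, upper, digit, other).

    Thresholds are illustrative; real policies should also screen against
    breached-password lists.
    """
    if len(passphrase) < 16:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, passphrase)) for c in classes) >= 3

print(meets_policy("Tr0ub4dor&3"))             # False: complex but too short
print(meets_policy("Correct Horse Battery 9"))  # True: long, mixed classes
```

Note how the length requirement does most of the work: a long passphrase with spaces passes easily, while a short "complex" password fails.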


What car shoppers need to know about protecting their data

You can ask representatives at the dealership about a carmaker’s privacy policies and whether you have the ability to opt in or opt out of data collection, data aggregation and data monetization — or the selling of your data to third-party vendors, said Payton. Additionally, ask if you can be anonymized and not have the data aggregated under your name and your vehicle’s unique identifying number, she said. People at the dealership “might even point you towards talking to the service manager, who often has to deal with any repairs and any follow up and technical components,” said Drury. ... These days, many newer vehicles essentially have an onboard computer. If you don’t want to be tracked or have vehicle data collected and shared, you might find instructions in your owner’s manual on how to wipe out your personalized data and information from the onboard computer, said Payton. “That can be a great way if you are already in a car and you love the car, but you don’t like the data tracking,” she said. While you may not know if the data was already collected and sent out to third parties, you could do this on a periodic basis, she said.



Quote for the day:

"Thinking should become your capital asset, no matter whatever ups and downs you come across in your life." -- Dr. APJ Kalam

Daily Tech Digest - March 30, 2024

The future of Banking as a Service: Banking trends 2024

Reflecting on the banking industry trends and emerging technologies explored, what does the future hold for Banking as a Service? And is the banking system changing as a result? The number of unbanked people remains high and financial inclusivity low, but Banking as a Service systems are helping to change that. While its impact on the customer isn’t direct, BaaS enables non-bank providers to explore new and untapped markets, and to expand their embedded offerings to underserved consumers. With these non-bank providers, fueled by BaaS, consumers aren’t restricted to traditional banking requirements and now have a wider variety of payment and credit options. As this sector progresses, we could see access and inclusivity to financial services increase, with more personalized finance solutions diversifying the industry’s offering. The adoption of Banking as a Service by traditionally non-financial entities is also a top area to watch. Companies in areas such as telecommunications, energy and utilities, and even education are integrating financial services into their systems, streamlining transactions and improving customer experience.


From Despair to Disruption: Zafran Takes on Cyber Mitigation

Zafran aims to close the gap between threat detection and remediation by anticipating and neutralizing threats before they can be exploited by attackers, according to Yashar. She wants to use the funding led by Sequoia Capital and Cyberstarts to make Zafran's platform more scalable, integrate AI to refine the mitigation knowledge base, and assemble a team of top-tier developers, researchers and analysts. "Raising is not hard when you're solving a real pain," Yashar said. "The biggest money is going toward the platform and hiring the best talent." On the risk assessment side, Zafran wants to take a customer's existing controls into consideration when determining which vulnerabilities pose the biggest risk to them, which Yashar said will help organizations optimize their return on investment. The company's dashboard helps customers see which risk is most exploitable as well as risk-reduction activity they could carry out with their existing controls. Zafran has built a war game simulation that allows customers to check how well their cyber platform defends against existing threats and how much risk is reduced by paying for additional controls.


Infrastructure as Code Is Dead: Long Live Infrastructure from Code

Despite the clear benefits to scale and automation that come with IaC, it remains very complex because cloud infrastructure is complex and constantly changing. As more teams are involved with cloud provisioning, they have to agree how best to use IaC tools and learn the nuances of each one they choose. With these added pressures, fresh solutions promising to improve the developer experience without increasing risk are emerging. To create the next generation of solutions, organizations need to understand where the problems truly lie for development, platform engineering and security teams. ... With multiple tools and frameworks to choose from, learning new languages and tools can be difficult for teams whose experience stems from manual infrastructure provisioning or writing application code. In addition to requiring a new programming language and interface, most IaC tools define and support infrastructure and resource management using declarative languages. That means teams must learn how to define the desired state of the infrastructure environment rather than outlining the steps required to achieve a result, a challenge for new and experienced programmers alike.
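The declarative model the article describes — stating the desired end state and letting the tool work out the steps — can be illustrated in miniature. The resource names and specs below are invented; real IaC engines add dependency ordering, drift detection, and state storage on top of this basic diff:

```python
# Declarative IaC in a nutshell: describe the desired end state, and an
# engine computes a "plan" of create/update/delete steps to reach it.
def plan(current: dict, desired: dict) -> list:
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))     # exists in desired only
        elif current[name] != spec:
            actions.append(("update", name))     # exists in both, spec differs
    for name in current:
        if name not in desired:
            actions.append(("delete", name))     # exists in current only
    return actions

current = {"vm-web": {"size": "small"}, "vm-old": {"size": "small"}}
desired = {"vm-web": {"size": "medium"}, "vm-db": {"size": "large"}}
print(plan(current, desired))
# [('update', 'vm-web'), ('create', 'vm-db'), ('delete', 'vm-old')]
```

This is the mental shift the article points to: the author of `desired` never writes the create/update/delete steps; the engine derives them.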


The most versatile, capable category twisted pair ever

There are still many situations that don’t require the versatility and performance of Cat6A throughout the entire network but do require at least multi-gigabit speeds in specific areas for specific applications, without the hassle of mitigation efforts. The GigaSPEED XL5 solution, a new addition to the GigaSPEED family, addresses the growing sweet spot for a Category 6 solution that can support the intermediate 2.5 and 5.0 GbE bandwidth demands, guaranteed and without mitigation. GigaSPEED XL5 cables can support four connections in a 100-meter channel to support 5 GbE. So it’s ideal for connecting wireless access points located in the ceiling. And because the cable diameter is only slightly larger than GigaSPEED XL cables, the installation tools and procedures are the same as well. Some companies are now beginning the transition from Wi-Fi 6 to more bandwidth-heavy Wi-Fi 6E. It will be several more years before the migration to Wi-Fi 7 and its 10+ GbE demands. As a result, the GigaSPEED XL5 solution has an important role to play in enterprise networks for many years to come.


OpenAI Tests New Voice Clone Model

With hyper-realistic voice generation, a criminal could trick family members into scams or worse. And with an election cycle coming up, concerns about deepfakes used to spread misinformation are growing. “This is a massive, dual-edged sword,” Saxena tells InformationWeek in a phone interview. “This could be another nail in the coffin for truth and data privacy. This adds yet more of an unknown dynamic where you could have something that can create a lot of emotional distress and psychological effects. But I can also see a lot of positives. It all depends on how it gets regulated.” ... Max Ball, a principal analyst at Forrester, says voice cloning software already exists, but the efficiency of OpenAI’s model could be a game-changer. “It’s a pretty strong step in a couple ways,” Ball tells InformationWeek in an interview. “Today, from what the vendors are showing me, you can do a custom voice, but it takes 15-20 minutes of voice to be able to train it. While 15 minutes doesn’t sound like a lot of time, it’s tough to get anyone to sit down for 15 minutes during a day of work.”


How to Tame Technical Debt in Software Development

Huizendveld provided some heuristics that have helped him tame technical debt:
- If you can fix it within five minutes, then you should.
- Try to address technical debt by improving your domain model. If that is too involved, you could resort to a technical hack. If even that is too involved, try to at least automate the solution. And when even that is too difficult, make a checklist for the next time.
- Agree on a timebox for the improvement that you introduce with the team. How much time are you willing to invest in a small improvement? That defines your timebox. It is up to you and the team to honour that timebox, and if you exceed it, make a checklist and move on.
- Don’t fix it yourself if it can be fixed by machines.
- If it is messy because you have a lot of debt, then make it look messy. Please don’t make a tidy list of your technical debt. The visual should inspire change.
- Only people with skin in the game are allowed to pay off debt, in order to prevent solutions that don’t work in practice.


How New Tech Is Changing Banking

At its core, blockchain provides a shared record of transactions that is updated in real time. This allows complete transaction transparency while eliminating inefficiencies and risks associated with manual processes. All participants in a blockchain network can view a single source of truth. For banking, blockchain delivers enhanced security and lower fraud risk. Records cannot be altered without agreement from all network participants, preventing falsified or duplicated transactions. Data is also cryptographically secured and distributed across the network. Even if one location is compromised, the data remains validated and secured. Blockchain also brings new levels of efficiency to banking. With an immutable record and smart contracts that execute automatically, blockchain eliminates laborious reconciliation and confirmation steps. Settlement times can be reduced from days to minutes. These efficiencies translate into lower operational costs for banks. By removing intermediaries and allowing peer-to-peer transactions, blockchain also opens up new opportunities in banking. From micropayments to decentralized finance, blockchain enables models that are impossible with traditional infrastructure.


Cloud Email Filtering Bypass Attack Works 80% of the Time

After examining Sender Policy Framework (SPF)-specific configurations for 673 .edu domains and 928 .com domains that were using either Google or Microsoft email servers along with third-party spam filters, the researchers found that 88% of Google-based email systems were bypassed, while 78% of Microsoft systems were. The risk is higher when using cloud vendors, since a bypass attack isn't as easy when both filtering and email delivery are housed on premises at known and trusted IP addresses, they noted. The paper offers two major reasons for these high failure rates: First, the documentation to properly set up both the filtering and email servers is confusing and incomplete, and often ignored or not well understood or easily followed. Second, many corporate email managers err on the side of making sure that messages arrive to recipients, for fear of deleting valid ones if they institute too strict a filter profile. "This leads to permissive and insecure configurations," according to the paper. ... All three of the main email security protocols — SPF, DMARC, and DKIM — need to be configured to be truly effective at stopping spam.
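One concrete place a "permissive and insecure configuration" shows up is the qualifier on an SPF record's final `all` mechanism: `-all` (fail) is strict, `~all` (softfail) is lenient, and `+all` or `?all` effectively disable enforcement. A toy check (the record strings are made up for illustration; see RFC 7208 for the full grammar):

```python
def spf_all_qualifier(record: str):
    """Return the qualifier on the final 'all' mechanism of an SPF TXT record.

    '-' = fail (strict), '~' = softfail, '+'/'?' = permissive.
    Returns None if the string is not an SPF record or lacks an 'all' term.
    """
    if not record.startswith("v=spf1"):
        return None
    for term in record.split():
        if term.lstrip("+-~?") == "all":
            # An unqualified "all" defaults to "+" per the SPF spec.
            return term[0] if term[0] in "+-~?" else "+"
    return None

print(spf_all_qualifier("v=spf1 include:_spf.example.com -all"))  # '-' (strict)
print(spf_all_qualifier("v=spf1 include:_spf.example.com +all"))  # '+' (permissive)
```

A fleet-wide scan of domains for `+all`/`?all` endings is a cheap first audit step toward the stricter configurations the paper recommends.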


RPA promised to solve complex business workflows. AI might take its job

Like most enterprise software companies, RPA vendors are experimenting with generative AI technologies. "Generative AI is poised to amplify the accessibility and scalability of RPA, mitigating the predominant obstacles to entry, namely the need for specialized developers and the risk of bot failure," Saxena said. Alex Astafyev, co-founder and chief business development officer at ElectroNeek, agreed that generative AI will make it much easier to use RPA technology inside companies that have their expensive software developers committed to other projects. "While many RPA platforms follow a low-code approach, thus allowing non-tech users to build automation bots, the knowledge of variables and programming logic might be needed in certain cases. Integration of AI lowers the barrier even further," he said. ... Generative AI technology will also allow RPA systems to deal with complicated problems described with natural language inputs, Pandiarajan said. “In the near future, it is conceivable that you could ask a bot about the status of a customer's package in the fulfillment process, and the AI would understand the process and provide real-time updates," he said.


Why CDOs Need AI-Powered Data Management to Accelerate AI Readiness in 2024

Historically, data and AI governance have been marred by complexity, hindered by siloed systems and disparate standards. However, the urgency of the AI-driven future demands a paradigm shift. Enter modern cloud-native integrated tools – the catalysts for simplifying the adoption of data and AI governance. Pezzetta wishes to leverage AI to clean data and look for anomalies. By leveraging a modernized solution approach, organizations can streamline governance processes, breaking down silos and harmonizing standards across disparate datasets. These tools offer scalability, flexibility, and interoperability, empowering stakeholders to navigate the complexities of data and AI governance with ease. ... “We need to bring AI into our processes. Therefore, we need to define governance processes to develop AI and data together with hubs in business on centralized platforms with integration patterns. I would love to get AI functions in ETL (extract, transform, and load) processes. I hope that we start to use AI in the data pipelines to enhance data quality,” Zimmer adds.
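One modest shape Zimmer's wish could take is an automated quality gate inside a data pipeline that flags anomalies before load. The z-score rule and threshold below are illustrative assumptions, not a description of any vendor's feature; real pipelines would layer on schema checks, learned models, and human review:

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` population standard deviations from
    the mean -- a placeholder for smarter, learned data-quality checks.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []   # no variation, nothing to flag
    return [v for v in values if abs(v - mean) / stdev > threshold]

# E.g., a transform step quarantines flagged rows instead of loading them.
print(flag_anomalies([10, 11, 9, 10, 12, 10, 200]))  # [200]
```

Even this crude check illustrates the governance point: quality enforcement moves out of ad hoc scripts and into a defined, auditable step of the pipeline.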



Quote for the day:

“When you fail, that is when you get closer to success.” -- Stephen Richards

Daily Tech Digest - March 29, 2024

Suspected MFA Bombing Attacks Target Apple iPhone Users

Multifactor bombing attacks — also known as multifactor fatigue attacks — are a social engineering exploit in which attackers flood a target's phone, computer, or email account with push notifications to approve a login or a password reset. The idea behind these attacks is to overwhelm a target with so many second-factor authentication requests that they eventually accept one either mistakenly or because they want the notifications to stop. Typically, these attacks have involved the threat actors first illegally obtaining the username and password to a victim account and then using a bombing or fatigue attack to obtain second-factor authentication to accounts protected by MFA. In 2022, for instance, members of the Lapsus$ threat group obtained the VPN credentials for an individual working for a third-party contractor for Uber. They then used the credentials to repeatedly try and log in to the contractor's VPN account triggering a two-factor authentication request on the contractor's phone each time — which the contractor ultimately approved. The attackers then used the VPN access to breach multiple Uber systems.


Finding software flaws early in the development process provides ROI

Unfortunately, enterprise software development teams at many organizations are not finding security-related software flaws as they write their software. As a result, such flaws get shipped in the applications used by customers, partners, suppliers, and employees. This creates serious security risks as threat actors might find and use these flaws to breach enterprise applications and move laterally throughout their target environments. Once a security-related flaw is published to software used in production, the race is on to find the bug first. If a company is lucky, the flaw will be found during a software security assessment by its internal security team or perhaps a third-party provider. If the flaw lingers too long, it’s more likely to be found by an attacker targeting the organization in the hopes of stealing data or perhaps conducting a ransomware attack. The security and increased trust associated with quality software are clear. The return on investment and the business benefits of high-quality and secure software are not always well understood.


7 tips for leading without authority

Leading without authority starts with individuals deciding to create change then bringing people together over a shared goal. If you believe in this strategy and want it to work, make it fundamental to your organizational structure. “We made an intentional transition at the onset of the pandemic to be a fully remote organization,” says Elaine Mak, chief people and performance officer at Valimail. “At the same time, we transitioned from a founder-led to a team-led model.” That transition involved democratizing decision-making, relying on experts within the organization, and leaning into letting people create outcomes through collaboration. “I brought the phrase, ‘Don’t be right, get it right,’ into the organization,” says Seth Blank, Valimail’s CTO. “It’s at the crux of the question of how to lead without authority. If you’re the expert and you bring a team together, come in with humility and ask, ‘How do we do this? How do we learn together?’ If you do that, you can move mountains from anywhere on the organization — if the organization is set up to respond. You need the culture, and you need leaders who expect that. Then people can do amazing things,” Blank says.


Chainguard: Outdated Containers Accumulate Vulnerabilities

EOL software is software that is no longer supported by the creator of the application, either because it is an older version of the software that is no longer maintained, or because the entities that maintained the software are no longer around at all. In either case, vulnerabilities can still be found in these applications, and since they are no longer patched, they soon become a focus for actors with malicious intent. “And the problem becomes aggravated when using container images,” Dunlap writes. “Using a container often means adding additional components from underlying ‘base images,’ which can easily lead to images with hundreds of components, each a part of the attack surface.” The problem only grows worse over time for users, as without regular updates, applications get harder and harder to update to the latest version over time. Looking at software projects listed on endoflife.date, Dunlap found that the longer a project has been EOL, the more vulnerabilities that image will collect. This inspection included images for Traefik, F5’s NGINX, Rust, and Python.


Cisco: Security teams are ‘overconfident’ about handling next-gen threats

Part of the problem that most companies are facing, according to Cisco, is the complicated nature of their security stacks. More than two-thirds of respondents said that their company had more than 10 separate offerings in their security stack, and a quarter said they had 30 or more. “This reflects the way in which the industry has evolved over the years,” the report read. “As new threats emerged, new solutions were developed and deployed to counter them, either by existing vendors or new ones.” Frank Dickson, group vice president for IDC’s security and trust research practice, said that the concern about complicated tool stacks is far from a new one. “We’ve been having that debate in security for ten years,” he said. Efforts to centralize security systems have been around for just as long, he said, but for too long, the offerings peddled as “platforms” weren’t really anything of the sort — more bundles of interrelated products than true foundations for all-around security. That’s finally beginning to change, however, Dickson said.


Saga Pattern in Microservices Architecture

In a typical microservice-based architecture, where a single business use case spans multiple microservices, each service has its own local datastore and localized transaction. When a transaction spans many such services, a mechanism is needed to coordinate it across them. ... The Saga Pattern is an architectural pattern for implementing a sequence of local transactions that helps maintain data consistency across different microservices. Each local transaction updates its database and triggers the next transaction by publishing a message or event. If a local transaction fails, the saga executes a series of compensating transactions to roll back the changes made by the previous transactions. This ensures that the system remains consistent even when transactions fail. ... The Saga Pattern can be implemented in two different ways. Choreography: In this pattern, the individual microservices consume the events, perform the activity, and pass the event to the next service. ... Orchestration: In this pattern, all the microservices are linked to a centralized coordinator that orchestrates the services in a predefined order, thus completing the application flow.
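The compensating-transaction idea can be shown in a small runnable sketch of the orchestration style. The service calls below are stubbed with plain functions; in a real system each step would be a network call to a separate service:

```python
# Minimal orchestrated saga: each local transaction is paired with a
# compensating action. On failure, completed steps roll back in reverse order.
def run_saga(steps):
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)   # remember how to undo this step
        except Exception:
            for comp in reversed(done):
                comp()                # compensate already-completed steps
            return False
    return True

log = []
def create_order():   log.append("order created")
def cancel_order():   log.append("order cancelled")
def charge_payment(): raise RuntimeError("payment declined")
def refund_payment(): log.append("payment refunded")

ok = run_saga([(create_order, cancel_order), (charge_payment, refund_payment)])
print(ok, log)  # False ['order created', 'order cancelled']
```

Note that the failed step itself is never compensated (its refund never runs); only the steps that actually committed are undone, which is exactly the consistency guarantee the pattern provides.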


Cisco warns of password-spraying attacks targeting VPN services

Security researcher Aaron Martin told BleepingComputer that the activity observed by Cisco is likely from an undocumented malware botnet he named ‘Brutus.’ The connection is based on the particular targeting scope and attack patterns. Martin has published a report on the Brutus botnet describing the unusual attack methods that he and analyst Chris Grube observed since March 15. The report notes that the botnet currently relies on 20,000 IP addresses worldwide, spanning various infrastructures from cloud services to residential IPs. The attacks that Martin observed initially targeted SSLVPN appliances from Fortinet, Palo Alto, SonicWall, and Cisco but have now expanded to also include web apps that use Active Directory for authentication. Brutus rotates its IPs every six attempts to evade detection and blocking, while it uses very specific non-disclosed usernames that are not available in public data dumps. This aspect of the attacks raises concerns about how these usernames were obtained and might indicate an undisclosed breach or exploitation of a zero-day vulnerability.


Understanding Polyglot Persistence: A Necessity for Modern Software Engineers and Architects

Unlock the Power of Polyglot Persistence with ‘Polyglot Persistence Unleashed.’ This comprehensive guide takes you on a transformative journey, illustrating the integration of MongoDB, Cassandra, Neo4J, Redis, and Couchbase within Enterprise Java Architecture. It delves deep into NoSQL databases, Jakarta EE, and Microprofile, empowering you with the prowess to architect and implement sophisticated data storage solutions for robust and scalable Java applications. From in-depth exploration to practical examples, optimization strategies, and pioneering insights, this book is your ultimate guide to revolutionizing data management in your Java applications. ... The Jakarta Data specification is a beacon of innovation for Java developers. It offers a potent API that effortlessly bridges the diverse worlds of relational and NoSQL databases. It fosters seamless integration of data access and manipulation, adhering to a domain-centric architecture that simplifies persistence complexities.


What Is AI TRiSM, and Why Is it Time to Care?

The goal of AI TRiSM is to place the necessary trust, risk and security guardrails around AI systems so that enterprises can ensure that these systems are accurate, secure and compliant. This can be a daunting undertaking, for while there are many years of governance experience and best practices for traditional applications and structured systems-of-record data, there are few established best practices for managing and analyzing the structured and unstructured data that AI consumes, along with its applications, algorithms and machine learning models. How, for instance, do you vet all of the incoming volumes of data from research papers all over the world that your AI might be analyzing in an effort to develop a new drug? Or how can you ensure that you are screening databases for the best job candidates if you are only using your company’s past hiring history as your reference? ... In contrast, AI systems have few established maintenance practices. When AI is first deployed, it’s checked against what subject matter experts in the field would conclude, and it must agree with those experts 95% of the time. Over time, however, business, environmental, political and market conditions change.


Graph Databases: Benefits and Best Practices

The problems that can develop when working with graph databases include using inaccurate or inconsistent data and learning to write efficient queries. Accurate results rely on accurate and consistent information: if the data going in isn’t reliable, the results coming out cannot be trusted. Queries can also fail when the stored data uses non-generic terms while the query uses generic terminology, so the query must be designed to meet the system’s requirements. Inaccurate data contains outright errors, such as a wrong address, a wrong gender, or any number of other mistakes. Inconsistent data, on the other hand, describes a situation in which multiple tables in a database work with the same data but receive it from different inputs in slightly different versions (misspellings, abbreviations, etc.). Inconsistencies are often compounded by data redundancy. Graph queries interrogate the graph database, and these queries need to be accurate, precise, and designed to fit the database model. The queries should also be as simple as possible.



Quote for the day:

“Accomplishments will prove to be a journey, not a destination.” -- Dwight D. Eisenhower

Daily Tech Digest - March 28, 2024

‘ShadowRay’ vulnerability on Ray framework exposes thousands of AI workloads, compute power and data

The vulnerability was disclosed to Anyscale along with four others in late 2023 — but while all the others were quickly addressed, CVE-2023-48022 was not. Anyscale ultimately disputed the vulnerability, calling it “an expected behavior and a product feature” that enables the “triggering of jobs and execution of dynamic code within a cluster.” ... Ray doesn’t have authorization because it is assumed that it will run in a safe environment with “proper routing logic” via network isolation, Kubernetes namespaces, firewall rules or security groups, the company says. This decision “underscores the complexity of balancing security and usability in software development,” the Oligo researchers write, “highlighting the importance of careful consideration in implementing changes to critical systems like Ray and other open-source components with network access.” However, disputed tags make these types of attacks difficult to detect; many scanners simply ignore them. To this point, researchers report that ShadowRay did not appear in several databases, including Google’s Open Source Vulnerability Database (OSV). Also, they are invisible to static application security testing (SAST) and software composition analysis (SCA).


Data governance in banking and finance: a complete guide

Data stewardship is an important concept in data governance that is crucial for creating a culture of accountability and transparency around data management. Data stewards are intermediaries between IT and business units, ensuring that data quality is up to the established standard. In principle, data stewardship creates actors within the organization who are interested in and can be held accountable for data management. This helps mitigate data-related risks and maximize the value of data assets. Appointing data stewards alone doesn't fulfill the accountability cycle. Real accountability in data governance goes beyond the operational level. It needs senior management's active involvement. The sophistication and complexity of the accountability and management structures around data governance depend on the data they will govern. Banks are considered enterprises with the highest level of data complexity, with the added challenge of constantly shifting regulation. However, the governance infrastructure's exact scale varies with the bank's size.


Will a Google-Apple deal kill Microsoft’s AI dominance?

Even if the deal goes through, Microsoft could still dominate AI. It has a substantial lead in AI, and it’s not taking anything for granted. OpenAI has been quickly releasing new, more powerful versions of GPT — version 4 was released in 2023, and it looks as if a “materially better” version 5 will be available this summer. So ChatGPT and Copilot are constantly becoming more powerful. In addition, Microsoft just hired Mustafa Suleyman, co-founder of DeepMind, which was bought by Google in 2014 and which ultimately became Gemini. After Suleyman sold DeepMind, he founded another AI startup, Inflection AI, and Microsoft has hired not just Suleyman, but nearly the entire AI staff of Inflection, including its chief scientist Karén Simonyan. Microsoft now has the best AI talent in the world either on staff or working for OpenAI. Microsoft has also been busy monetizing AI. Copilot is now built into the company’s entire product line, offered as a fee-based add-on. Microsoft can plow that revenue back into research. And, of course, it’s not a foregone conclusion that Google and Apple will make a deal. Even if they do, it’s not clear how well it will work.


The increasing potential and challenges of digital twins

Evidently, there are many commonalities across these domains when it comes to current obstacles and opportunities for digital twins — but at the same time, there is also variability in how digital twins are perceived and used depending upon the specific challenges faced by each research community. Accordingly, the National Academies of Sciences, ... The report — recapitulated by Karen Willcox and Brittany Segundo in a Comment — proposes a cross-domain definition for digital twins based on a previously published definition and highlights many issues and gaps also echoed by some of the manuscripts in the Focus, such as the critical role of verification, validation, and uncertainty quantification; the notion that a digital twin should be ‘fit for purpose’, and not necessarily an exact replica of the corresponding physical twin; the need for protecting individual privacy when relying on identifiable, proprietary, and sensitive data; the importance of the human in the loop; and the need for sophisticated and scalable methods for enabling an efficient bidirectional flow of data between the virtual and physical assets.


Why CTOs Must Become Better Storytellers

David Lees, CTO of Basis Technologies, says impactful storytelling by CTOs can help demonstrate a complete understanding of stakeholder needs. “Most CTOs know their technological offerings inside and out, and how they can help the organization in the immediate and longer term,” he says. However, CTOs will need to communicate their expertise in a way that is accessible to other C-suite members in non-tech departments, turning complex, jargon-heavy ideas into simpler narratives. Gaining inspiration from stakeholders is not a one-size-fits-all exercise, so an in-depth knowledge of everyone empowers CTOs to tailor their communication on a case-by-case basis. Some employees or investors are motivated by facts and figures, for example pointing out how recent upgrades have doubled service speeds in comparison to a competitor. ... Petrovskis says he recommends ditching whitepapers and reading case studies, but most important is to get out in front of your customers. “Don’t get me wrong, there’s a time and place for whitepapers, but they don’t really provide the real feel of customer issues and understanding the issues your customers face will allow you to be far more relatable to the audiences you’re trying to reach,” he explains.


Navigating the Complexities of Data Privacy: Balancing Innovation and Protection

Certainly, the regulations surrounding the use of personal data have evolved significantly since the Cambridge Analytica scandal, in which a British consulting group obtained personal data from millions of Facebook users without their consent for political advertising purposes. Both Meta (Facebook’s parent company) and Google have introduced privacy guides — albeit somewhat intricate — aimed at empowering users to prevent a recurrence of such a notorious incident. Yet, while tech giants like Google and Facebook can readily afford the expenses associated with robust privacy measures, it raises concerns about the potential burden imposed on innovative but underfunded startups. Fledgling entities, brimming with promising ideas, may find themselves constrained by the necessity for extensive privacy controls, hindering their ability to innovate. For tech businesses, adapting to these privacy laws can mean increased compliance costs and potential innovation delays. For consumers, while their data rights are better protected, the experience of using digital services may become more cumbersome due to consent requirements.


Patchless Apple M-Chip Vulnerability Allows Cryptography Bypass

The new vulnerability is associated with a performance optimization feature called data memory-dependent prefetchers (DMP) in Apple's M1, M2, and M3 microprocessors, which are used to preemptively cache data; they allow the chip to anticipate the next bit of information that it will need to access, which speeds up processing times. DMP "predicts memory addresses to be accessed in the near future and fetches the data into the cache accordingly from the main memory," according to the paper. Apple's specific take on DMP takes prefetching a step further by also considering the content of memory to determine what to fetch, the researchers noted — and therein lies the problem. Many developers use a coding practice or technique called constant-time programming, especially developed for cryptographic protocols. The idea behind constant-time programming is to ensure that a processor's execution time remains the same, regardless of whether the inputs are secret keys, plaintext, or any other data. The goal is to ensure that an attacker cannot derive any useful information by simply observing execution times or by tracing the code's control flow and memory accesses.
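The constant-time idea the attack undermines can be illustrated with a toy sketch: the naive comparison leaks timing through its early exit, while the constant-time version touches every byte regardless of where a mismatch occurs. (In Python, the audited stdlib equivalent is `hmac.compare_digest`.)

```python
import hmac

# A naive comparison leaks timing: it returns at the first mismatching
# byte, so execution time depends on how much of the secret is correct.
def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False   # early exit -> data-dependent timing
    return True

# Constant-time version: examine every byte and accumulate differences,
# so the runtime is independent of where (or whether) the inputs differ.
def ct_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"k3y-material"
assert ct_equal(secret, b"k3y-material")
assert not ct_equal(secret, b"k3y-materiaX")
# In production code, prefer the stdlib's vetted implementation:
assert hmac.compare_digest(secret, b"k3y-material")
```

The research finding is that a DMP can reintroduce data-dependent memory behavior even when the code itself, like `ct_equal`, avoids data-dependent branches.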


AI-Driven Cloud Revolution: Transforming Business Operations and Efficiency

AI-driven optimizations have a significant impact on cloud expenditure for businesses, driving cost savings and efficiency gains across various dimensions. AI algorithms analyze usage patterns to predict resource needs, enabling businesses to automatically scale resources up or down as needed. This eliminates over-provisioning and under-provisioning, ensuring optimal resource utilization and avoiding wasted costs. AI automates tasks like resource management and infrastructure optimization, reducing the need for dedicated personnel. AI helps identify and eliminate underutilized resources and predict hardware failures, preventing downtime and associated expenses. Data management is also optimized by archiving less-accessed data in cheaper tiers and utilizing compression techniques, further reducing storage costs. To help businesses grow, we at G7 CR reduce their cloud spend by a minimum of 25%. Also, as mentioned earlier, we are launching the “AI Apps Program,” a cost-effective way to leverage AI and achieve outsized results.
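The predict-then-scale idea can be sketched with a toy moving-average forecast. All capacities, headroom factors, and traffic numbers here are illustrative and stand in for whatever model a real platform would use:

```python
import math

# Toy predictive-scaling sketch: forecast next-interval demand from a
# moving average of recent usage, then size capacity with a fixed
# headroom factor instead of static over-provisioning.
def forecast_demand(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

def target_instances(history, per_instance_capacity=100.0, headroom=1.2):
    predicted = forecast_demand(history)
    # Round up so predicted demand plus headroom always fits.
    return max(1, math.ceil(predicted * headroom / per_instance_capacity))

requests_per_min = [220, 260, 300, 340, 380]  # rising load
print(target_instances(requests_per_min))  # 5 instances for ~408 req/min
```

Production autoscalers typically replace the moving average with a learned seasonal model, but the scale-to-forecast-plus-headroom decision is the same shape.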


How AI-powered employee experiences can create an engaged workforce

AI-driven recruitment platforms are transforming this landscape by automating repetitive tasks, identifying top talent more efficiently, and enhancing the overall candidate experience. AI algorithms help recruiters analyze vast amounts of data to identify patterns and predict candidate success, leading to more informed hiring decisions. Additionally, AI-powered chatbots can engage with candidates in real time, providing personalised support and information throughout the application and onboarding process. Virtual assistants, for instance, can improve communication and shorten response times by giving staff members immediate access to resources, information, and assistance. To promote a culture of appreciation and recognition, managers can use AI-driven feedback and recognition platforms to promptly provide feedback and acknowledge their team members. Virtual assistants powered by AI can also address common HR inquiries, provide access to relevant policies and procedures, and even offer personalised recommendations for stress management and self-care. Several businesses have started using AI-powered tools to monitor and manage employee engagement.


Hackers Developing Malicious LLMs After WormGPT Falls Flat

Crooks are looking into hiring AI experts to exploit private GPTs developed by OpenAI and Bard, to jailbreak restrictions put in place by the application developers and create malicious GPTs, he said. "They're looking for things that will help them with generating code that actually does what it's supposed to do and doesn't have all hallucinations," Maor said. A March 19 report by Recorded Future highlights threat actors using generative AI to develop malware and exploits. The report identifies four malicious use cases for AI that do not require fine-tuning the model. They include using AI to alter malware so that it evades YARA rules, which detection tools rely on to identify and classify malware. "These publicly available rules also serve as a double-edged sword," the report said. "While they are intended to be a resource for defenders to enhance their security measures, they also provide threat actors with insights into detection logic, enabling them to adjust their malware characteristics for evasion." To demonstrate the technique, Recorded Future altered SteelHook, a PowerShell info stealer used by APT28, by submitting the malware's source code to an LLM system.



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - March 27, 2024

‘Observability’ Is Not Observability When It Comes to Business KPIs

We all find ourselves in a continual search for faster and faster identification and resolution, but what we really want is to switch to a paradigm of being “proactive.” After all, if we only focus on solving issues faster — falsely equating proactivity with speed — then we’ll forever be responding to fire drills based on technical KPIs. Sure, we’ll get faster at them, but we won’t be making the best decisions for our business. “Proactivity” means running all engineering efforts based on leading indicators of core business metrics. Indicators that map to purchase flows, startup times, user abandonment — the KPIs that are specific to our apps that reflect what our business cares about, like churn, revenue and LTVs. These leading indicators should be specific to our business and should ultimately connect with the end user of our apps. And so the true goal for whatever structure of data that we end up with is that it must reflect the end-user experience — not myopic, disconnected backend metrics and KPIs. Anything less and we cannot connect technical failures to business failures — and definitely not without massive amounts of toil and guesswork.


How security leaders can ease healthcare workers’ EHR-related burnout

EHR systems have been designed to facilitate the billing and documentation aspects of patient care, with health management and patient needs often being an afterthought. For example, charting solutions have recently been adding the ability for patients to exchange messages with their providers via patient portals. This addresses patients’ needs to communicate with their provider, but – without careful design – puts an additional burden on clinicians, who now need to spend unbillable time responding to messages that interrupt their day. ... Things would be so easy if we didn’t have to put up with those security controls! Thus, a call to action: take a closer look at where in the ecosystem your policies and/or tooling might contribute to issues that play into a less-than-optimal user experience for your healthcare system’s workforce. By (re-)evaluating how control requirements can be met without standing in the way of modernizing record management systems, CISOs may be able to identify opportunities that will help their CTOs with the task at hand while maintaining an appropriate risk posture.


Risky business: 6 steps to assessing cyber risk for the enterprise

A BIA is used to determine the potential business impact should any information asset or system have its confidentiality, availability, or integrity compromised. The first step in a BIA is to identify all relevant information assets, such as customer and financial data, and information used for the operation of services and systems, across all environments and across the entire information lifecycle. Once assets are identified, a value can be assigned to them. Then the extent of any potential security incident can be determined by comparing realistic scenarios comprising the most reasonable impact with worst-case scenarios for each asset. ... Threat profiling starts with the identification of potentially relevant threats through discussion with key stakeholders and analyzing available sources of threat intelligence. Once the threat landscape is built, each threat it contains should be profiled. Threats can be profiled based on two key risk factors: likelihood of initiation — the likelihood that a particular threat will initiate one or more threat events — and threat strength, or how effectively a particular threat can initiate or execute threat events. Threats can also be further profiled by separating them into an overarching group: adversarial, accidental, or environmental.
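The two-factor profiling above can be combined with asset value into a simple ranking. This is a toy sketch, not a formal methodology: the 1-5 scales, assets, and threat entries are all invented for illustration.

```python
# Toy risk-ranking sketch following the two-factor threat profiling:
# score = asset value x likelihood of initiation x threat strength,
# each on an illustrative 1-5 scale.
def risk_score(asset_value, likelihood, strength):
    return asset_value * likelihood * strength

threats = [
    # (asset, threat, asset_value, likelihood, strength)
    ("customer-db", "ransomware (adversarial)", 5, 4, 4),
    ("hr-portal", "misconfiguration (accidental)", 3, 3, 2),
    ("backup-site", "flooding (environmental)", 4, 1, 3),
]
# Rank highest-risk pairings first to prioritize mitigation effort.
ranked = sorted(threats, key=lambda t: risk_score(*t[2:]), reverse=True)
for asset, threat, *factors in ranked:
    print(f"{asset:12} {threat:30} score={risk_score(*factors)}")
```

Real BIA scoring usually weights worst-case versus reasonable-impact scenarios separately, but the multiply-and-rank step is the common core.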


How Artificial Intelligence Will First Find Its Way Into Mental Health

Although there are many challenges when relying on an artificial bot to interact with patients, there are still areas where artificial intelligence can augment decision-making. Health insurance companies already see the value in AI in reducing costs by identifying patients who are high utilizers of health care services. Prescribing providers routinely receive notifications from health insurance companies regarding irregular refills of prescriptions to encourage discontinuation of prescriptions that are not optimally used. Indeed, large insurance companies possess sizable datasets that are currently being analyzed to predict the onset of Alzheimer’s, diabetes, heart failure, and chronic obstructive pulmonary disease (COPD). In fact, AI has already become FDA-approved for specific uses, and currently, AI shines when it is applied to a very specific clinical issue. AI systems are initially being sought to enhance clinical judgment rather than replace clinical judgment. Ideally, AI will enhance clinician productivity by handling mundane tasks and alerting to that which may be equivocal and require further investigation by a human.


Women in IT: 'Significant Strides' Have Been Made, Yet Challenges Persist

The biggest challenge continues to be the underrepresentation of women in the tech industry, according to Meredith Graham, chief people officer at Ensono. "While we have seen improvement in recent years, there is still a large gap, particularly at the senior levels," she said. "To address discrimination and microaggressions, there can't be only one or two women in the room." Graham admits this isn't going to change overnight, and to create a safe work environment for women, there needs to be a collective effort within leadership to continue to create inclusive workplaces and to not tolerate discrimination at any level. "There are several strategies, but the two I have seen success with are women's mentorship programs and ensuring that women are considered for leadership positions," she said. Mentorship programs can encourage and foster growth as well as help women overcome any self-doubt when they have experienced senior mentors guiding them. "We've all had challenges throughout our careers, and learning that those challenges can be overcome is important for continued growth," Graham said.


5 Leadership Misconceptions That Hinder Success

There is a misconception that a leader's role is to dictate orders, perpetuating a command-and-control mentality. Leadership requires action, and leaders are the ultimate decision-makers in a company. However, command-and-control leadership stifles creativity and discourages open communication. Great leaders establish an inclusive working environment where collaboration flourishes, innovative ideas are shared freely, and team members are empowered to contribute their expertise — even if it means challenging preconceived notions. A leader's role is not just to give orders but to inspire, guide and facilitate the success of the team. ... Some leaders think they need to insulate their employees from bad news so the team doesn't get deflated by business challenges. But when leaders shut off communication, the team ends up making up their own stories to fill in the gaps, and the leader ends up isolated. As Jim Collins says, "Face the brutal facts." Great leaders respect their team, win their hearts and minds when they are transparent and see them as partners in overcoming challenges. Transparent communication also creates shared accountability.


Think you can ignore quantum computing? Think again.

Even before the algorithms are officially approved this summer, CIOs should start taking steps. Moody recommends they start by doing a cryptographic inventory to see which public key crypto systems they and their partners use. This isn’t easy, but several vendors are developing tools to help with that process. CIOs can also ensure they assign somebody to lead the transition, and that they have the funding and expert staff they need. Organizations can also start testing the algorithms in their environments and check that their supply chain partners are doing the same. Jeff Wong, global chief innovation officer at EY, says even if they’re not yet required to make a change, CIOs can already start planning NIST-approved algorithms into their cybersecurity upgrades. ... Another thing CIOs should do is protect against “store-now, decrypt-later” attacks. Hackers may already be collecting encrypted data that they can decrypt once quantum computers become big and reliable enough to run Shor’s algorithm. Some industries are more affected than others, such as healthcare, financial services, and higher education, where medical records, financial information, and academic records need to be protected for a lifetime.
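A first pass at the cryptographic inventory could be sketched as a triage over discovered algorithms. The asset names below are invented; the mapping reflects the standard rule of thumb that factoring- and discrete-log-based public-key schemes fall to Shor's algorithm, while symmetric primitives are only weakened (by Grover's) and can keep margin with larger keys.

```python
# Toy cryptographic-inventory triage: public-key schemes built on
# factoring or discrete logarithms (RSA, ECDSA, DH) are broken by
# Shor's algorithm; symmetric ciphers and hashes are only weakened.
SHOR_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048", "ECDH-P256"}

def triage(inventory):
    """inventory: {asset_name: algorithm} discovered during the scan."""
    return {name: ("migrate to a post-quantum algorithm"
                   if algo in SHOR_VULNERABLE
                   else "monitor; larger keys retain margin")
            for name, algo in inventory.items()}

assets = {"vpn-gateway": "RSA-2048",
          "tls-cert": "ECDSA-P256",
          "disk-encryption": "AES-256"}
for asset, action in triage(assets).items():
    print(asset, "->", action)
```

The hard part in practice is building the `assets` map at all, which is what the discovery tools Moody mentions are for.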


Striking a balance for sustainable growth in the AI-driven data center

When density rises, however, the extra heat generated creates a challenge because it means additional cooling is required. Meeting that need can take the form of innovative liquid and immersive technologies. At Data4, we are harnessing liquid cooling at our Marcoussis site in the Paris region with European cloud provider OVHcloud and have plans to expand this method to all our campuses. As we expand, this type of optimization is paramount when analyzing new sites and entering new markets. This is the case with the development of our new data center in the city of Hanau near Frankfurt, our first in Germany. With plans to invest €1 billion-plus to develop the 180MW facility on the 25-hectare site in stages until 2032, it will be one of the largest and most powerful data center campuses in Europe. Data centers of such scale are comparatively more efficient than smaller ones, having sufficient space to allow for scaling up to meet accelerated demand and therefore helping future-proof investments to a degree. 


The Value of an IT Architect – Why Focusing on Outcomes

First and foremost, an IT architect’s main role is to drive change that creates business opportunity through technology innovation. IT architects shape and translate business and IT strategy needs into realizable, sustainable technology solutions, whilst taking end-to-end solution delivery ownership from idea to benefits delivery. Without an IT architect, most solutions will end up being more expensive to operate, delivery will be late, and customer satisfaction will be poorer. So, the main value of an IT architect is to reduce cost and risk and to increase quality. There are several papers that detail the value of IT enterprise architecture. One example is an article published in the Journal of Systemics, Cybernetics and Informatics in March 2018 by Kurek et al. The research paper provided empirical indications for the effects of enterprise architecture on 3076 IT projects in 28 organizations. It reported a 14.5% increase in successful projects and a 26.2% decrease in failed projects when the organization has an enterprise architecture. Other studies focusing on enterprise architecture find similar results (see [2-6]), albeit some seem to be contradictory.


Alert: Hackers Hit High-Risk Individuals' Personal Accounts

"This is not a mass campaign against the public but a persistent effort to target people whom attackers consider to hold information of interest," says its guidance for high-risk individuals. The NCSC defines high-risk individuals in a cybersecurity context as anyone whose "work or public status means you have access to, or influence over, sensitive information that could be of interest to nation-state actors." This includes anyone who works in the political sphere, including elected legislators, candidates, staff, and activists as well as academics, lawyers, journalists and human rights groups. Hackers typically pick the fastest, easiest and least technical strategy required to achieve their goal, and that increasingly includes targeting not just high-profile individuals but also their families, said Chris Pierson, the CEO and founder of cybersecurity firm BlackCloak. "We saw this really increase in 2022 with attacks on personal cell numbers and emails in the Twilio, Uber and Zendesk attacks," he said. "We saw, publicly, executives being targeted in association with attacks on large companies like MGM and Dragos."



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - March 26, 2024

What Every CEO Needs To Know About The New AI Act

The act says “AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.” So it’s good to see that limiting the ways it could cause harm has been put at the heart of the new laws. However, there is a fair amount of ambiguity and openness around some of the wording, which could potentially leave things open to interpretation. Could the use of AI to target marketing for products like fast food and high-sugar soft drinks be considered to influence behaviors in harmful ways? And how do we judge whether a social scoring system will lead to discrimination in a world where we’re used to being credit-checked and scored by a multitude of government and private bodies? ... The act makes it clear that AI should be as transparent as possible. Again, there’s some ambiguity here—at least in the eyes of someone like me who isn’t a lawyer. Stipulations are made, for example, around cases where there is a need to “protect trade secrets and confidential business information.” But it’s uncertain right now how this would be interpreted when cases start coming before courts.


What’s behind Italy’s emergence as a key player in Europe’s digital landscape?

Regional cloud providers can respond promptly to needs that hyperscalers do not meet, equipped with more flexible offerings, highly customized services, and attention to local specificities. These are increasingly popular and insistent demands from businesses that require greater flexibility and customization of cloud services to adapt to their specific needs and a widespread presence in particularly strategic geographical regions to offer services that better meet local or sectoral needs. As a result, regions like Italy are increasingly becoming preferred cloud regions, and the data center sector is taking the same parallel path, which sees Italy as Europe's newest data hub. Credit is also due to local providers breaking away from the 'one size fits all' dynamic, offering tailor-made and ad hoc services for the needs of companies migrating to the cloud. ... Combined with the geographic benefits of being based in Italy, the current socio-economic climate, and the focus on regulatory compliance, Italy is well-positioned to solidify its place as a significant player in the future of the European cloud and data center scene.


Customer science: A new CIO imperative

Science is defined by many as the rigorous and systematic identification and measurement of phenomena. In both the for-profit and nonprofit sectors, the most important phenomenon is customer behavior and mindset. Customer science puts customer behavior and mindset under a microscope. Is your organization good at customer science? Does your organization measure customer experience? Does your organization employ “scientists” to observe and explain customer behavior based on the data you have collected? ... The path to customer science is fraught with paradoxes. The organizational paradox is that if the “Customer is King,” why is there no one in the enterprise with the authority to ensure that every interaction meets or exceeds expectations? Is this the role of the now very much in vogue chief customer officer? The chief experience officer? Glenn Laverty, now retired and former president and CEO at Ricoh Canada, finessed this responsibility/authority paradox by tying every employee’s compensation to customer experience/satisfaction metrics. What gets measured and what gets rewarded drive behavior.


Enhancing Secure Software Development With ASOC Platforms

There are many ways to adopt DevSecOps. For those looking to avoid complicated setups, the market offers ASOC-based solutions. These solutions can help companies save time, money, and labor while also reducing the time to market for their products. ASOC platforms enhance the effectiveness of security testing and maintain the security of software in development without delaying delivery. Gartner's Hype Cycle for Application Security, 2021, indicated that market penetration of these solutions ranged from 5% to 20% of the intended clients. Practical uptake of this technology remains low, primarily because of limited awareness of its availability and benefits. ASOC solutions incorporate Application Security Testing (AST) tools into existing CI/CD pipelines, facilitating transparent, real-time collaboration between engineering teams and information security experts. These platforms offer orchestration capabilities, meaning they set up and execute security pipelines, and they also carry out correlation analysis of issues identified by AST tools, aggregating this data for comprehensive insight.
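The correlation step described above can be sketched simply: group raw findings from multiple AST tools so the same underlying flaw is surfaced once. This is a minimal illustration, not any vendor's logic; the tool names, record fields, and severity scale are assumptions.

```python
# Correlate findings from multiple AST tools by (file, CWE) so the same
# underlying flaw reported by two scanners becomes one deduplicated issue.
from collections import defaultdict

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def correlate_findings(findings):
    """Group raw AST findings by (file, cwe); keep the highest severity seen."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[(f["file"], f["cwe"])].append(f)
    correlated = []
    for (path, cwe), group in grouped.items():
        worst = max(group, key=lambda f: SEVERITY_RANK[f["severity"]])
        correlated.append({
            "file": path,
            "cwe": cwe,
            "severity": worst["severity"],
            "reported_by": sorted({f["tool"] for f in group}),
        })
    return correlated

# Example: a SAST tool and a second scanner both flag the same SQL-injection sink.
raw = [
    {"tool": "sast-a", "file": "app/db.py",  "cwe": "CWE-89", "severity": "high"},
    {"tool": "scan-b", "file": "app/db.py",  "cwe": "CWE-89", "severity": "medium"},
    {"tool": "sast-a", "file": "app/web.py", "cwe": "CWE-79", "severity": "medium"},
]
issues = correlate_findings(raw)
```

Aggregating like this is what lets an ASOC dashboard show security teams one prioritized issue list rather than three overlapping tool reports.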


The cybersecurity skills shortage: A CISO perspective

Experienced cybersecurity professionals are poached daily, enticed with higher compensation and better working situations. Successful CISOs keep an eye on employee satisfaction and make sure to help staff manage stress levels. Active CISOs also open avenues for staff to grow their skill sets and career opportunities. ... There’s no reason why cybersecurity staff should be underpaid or underappreciated. Proactive CISOs educate the brass on competitive salary comparisons and risks/costs associated with understaffed teams and employee attrition. When it comes to cybersecurity staffing, executives must understand the foolishness of tripping over dollars to pick up pennies. ... How do you bolster staff efficiency without adding more bodies? Automate any process that can be automated. Automating security operations processes is a good start, but advanced organizations move beyond security alone and think about process automation across lifecycles that span security, IT operations, and software development. Examples could include finding/patching software vulnerabilities, segmenting networks, or DevSecOps programs.


Misaligned upskilling priorities could hinder AI progress

“The rapid rise of AI requires business leaders to build and shape the future workforce now to thrive, or risk lagging behind in a future transformed by a seismic shift in the skills needed for the era of intelligence,” said Libby Duane-Adams, Chief Advocacy Officer at Alteryx. “Not all employees need to become data scientists. It’s about championing cultures of creative problem-solving, learning to look at business problems through an analytic lens, and collaborating across all levels to empower employees to use data in everyday roles. Continuous investments in data literacy upskilling and training opportunities will create the professional trajectories where everyone can “speak data” and exploit AI applications for trusted, ethical outcomes.” “As India invests US$1.2 billion in a wide range of AI projects, the country is set to become a significant force for shaping the future of AI,” said Souma Das, Managing Director, India Sub-continent at Alteryx. “As organisations gear up for the future, our research highlights how imperative it is to nurture a diverse workforce with a range of data and analytics abilities to ensure employees are empowered to navigate the dynamic landscape together.”


Want to be a DevOps engineer? Here's the good, the bad, and the ugly

"The DevOps ecosystem is huge and constantly evolving," he added. "Tools and frameworks that were popular yesterday may be replaced by new alternatives. On top of your regular job as an engineer, you probably need to give up some of your free time for studying." Even when you gain more experience, "the learning doesn't stop," Henry said. "In fact, it's commonly noted as one of the things that DevOps engineers love most about their job. With the pace of development and the introduction of AI tools like ChatGPT, DevOps engineering today won't be the same as DevOps engineering two or three years from now." One aspect that may separate passionate DevOps engineers from other colleagues is the infrastructure management part of the job. "If you're not a fan of managing infrastructure, you're going to struggle," Henry cautioned. "This is a big one. As a DevOps engineer, I spend a huge amount of time setting up, configuring, and maintaining the cloud infrastructure that supports various applications. This means dealing with servers, databases, networks, and security on a daily basis. Now, if this excites you, great. This world could be perfect."


Decoding AI success: The complete data labeling guide

Data labeling is essential to machine learning data pre-processing. Labeling organizes data so a model can extract meaning from it; the labeled data then trains a machine learning model to find the same “meaning” in new, relevantly similar data. In this process, machine learning practitioners seek both quality and quantity. Because machine learning models make decisions based on all labeled data, accurately labeled data in larger quantities creates more useful deep learning models. In image labeling or annotation, a human labeler applies bounding boxes to relevant objects to label an image asset, with each class drawn in its own color, say, taxis and trucks in yellow and pedestrians in blue. A model that can accurately predict new data (in this case, street-view images of objects) will be more successful if it can distinguish cars from pedestrians. ... Data labeling projects start with locating and training human labelers (annotators). Annotators must be trained on each annotation project’s specifications and guidelines because use cases, teams, and organizations have different needs. After training, image and video annotators will label hundreds or thousands of images and videos using home-grown or open-source labeling tools.
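The bounding-box annotations described above are usually stored as simple structured records. The sketch below assumes a common [x, y, width, height] pixel convention and illustrative field names, not any particular labeling tool's format.

```python
# A labeled image object: a class name plus a bounding box.
# Boxes use [x, y, width, height] in pixels, a widely used convention.

def make_annotation(image_id, label, box):
    """Build one annotation record, rejecting degenerate boxes."""
    x, y, w, h = box
    assert w > 0 and h > 0, "bounding boxes must have positive extent"
    return {"image_id": image_id, "label": label, "bbox": [x, y, w, h]}

def class_counts(annotations):
    """Summarize how many objects of each class were labeled,
    a quick quality check on class balance in the dataset."""
    counts = {}
    for a in annotations:
        counts[a["label"]] = counts.get(a["label"], 0) + 1
    return counts

# One street-view frame with two taxis and a pedestrian labeled.
frame = [
    make_annotation("img_001", "taxi", (34, 120, 80, 48)),
    make_annotation("img_001", "taxi", (210, 115, 76, 50)),
    make_annotation("img_001", "pedestrian", (150, 90, 22, 60)),
]
```

Keeping annotations in a flat, uniform record like this is what makes it easy to audit label quality and feed the data into training pipelines later.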


4 steps to improve root cause analysis

It’s easier for devops teams to point to problems in the network and infrastructure as the root cause of a performance issue, especially when these are the responsibility of a vendor or another department. That knee-jerk response was a significant problem before organizations adopted devops culture and recognized that agility and operational resiliency are everyone’s responsibility. “The villain when there are application performance issues is almost always the network, and it’s always the first thing we blame, but also the hardest thing to prove,” says Nicolas Vibert of Isovalent. “Cloud-native and the multiple layers of network virtualization and abstraction caused by containerization make it even harder to correlate the network as the root cause issue.” Identifying and resolving complex network issues can be more challenging when building microservices, applications that connect to third-party systems, IoT data streams, and other real-time distributed systems. This complexity means that IT ops need to monitor networks, correlate network behavior with application performance issues, and perform network RCAs more efficiently.
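One lightweight way to do the correlation the passage calls for is to check whether a network metric and an application metric actually move together before blaming the network. This is only a sketch; the metric names and the 0.8 threshold are assumptions, and real RCA tooling uses far richer signals.

```python
# Flag the network as a likely root cause only when network retransmit
# spikes line up with application latency spikes over the same windows.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def network_implicated(retransmit_rate, app_latency_ms, threshold=0.8):
    """True when the two series co-move strongly enough to justify a
    deeper network RCA before pointing at the application itself."""
    return pearson(retransmit_rate, app_latency_ms) >= threshold

# Per-minute samples: here latency tracks retransmits closely.
retrans = [0.1, 0.2, 1.5, 1.4, 0.2, 0.1]
latency = [40, 45, 220, 210, 50, 42]
```

The point of gating the blame on evidence like this is exactly the article's complaint: the network is the first thing blamed and the hardest thing to prove.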


From Chaos to Clarity: Streamlining DevSecOps in the Digital Era

No development team deliberately sets out to build and deploy an insecure application. The reason applications with known vulnerabilities are deployed so often is that the cognitive load associated with discovering and remediating them is simply too high. The average developer can only allocate 10% to 20% of their time to remediating vulnerabilities; the rest is spent either writing new code or maintaining the application development environment used to write that code. If organizations want more secure applications, they need to make it easy for developers to correlate, prioritize, and contextualize vulnerabilities as they are identified. Most of the time, when developers are informed that a vulnerability has been discovered in their code, they have long since lost context. Vulnerabilities need to be identified immediately, at the time code is written, builds are created, and pull requests are made, and identified in a way that is actionable. Otherwise, that vulnerability is likely to be thrown atop the massive pile of technical debt that developers hope they’ll one day have the time to address.
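The correlate/prioritize/contextualize idea above can be sketched as a small triage function. The scoring weights, field names, and queue budget here are illustrative assumptions, not any product's actual logic.

```python
# Prioritize newly discovered vulnerabilities so the developer sees the
# few that matter while context is fresh, instead of a flat alert list.

SEVERITY = {"low": 1, "medium": 4, "high": 7, "critical": 10}

def priority(vuln):
    """Score = base severity, boosted when the flaw is reachable from
    user input, plus a bonus when it sits in the current pull request
    (where the developer still has context)."""
    score = SEVERITY[vuln["severity"]]
    if vuln.get("reachable"):
        score *= 2
    if vuln.get("in_current_pr"):
        score += 5
    return score

def triage(vulns, budget=2):
    """Return only the top-`budget` issues, highest priority first,
    keeping the remediation queue short enough to act on."""
    return sorted(vulns, key=priority, reverse=True)[:budget]

findings = [
    {"id": "V1", "severity": "medium",   "reachable": False, "in_current_pr": False},
    {"id": "V2", "severity": "high",     "reachable": True,  "in_current_pr": True},
    {"id": "V3", "severity": "critical", "reachable": False, "in_current_pr": False},
]
queue = triage(findings)
```

Note that the reachable, in-PR high ranks above the unreachable critical: surfacing actionable issues first is the whole argument for shifting identification to write/build/PR time.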



Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln