Daily Tech Digest - July 04, 2024

Understanding collective defense as a route to better cybersecurity

Organizations invoking collective defense to protect their IT and data assets usually focus on sharing threat intelligence and coordinating threat response actions to counter malicious actors. Success depends on defining and implementing a collaborative cybersecurity strategy in which organizations work together, internally and externally, across industries to defend against targeted cyber threats. ... Putting this into practice requires organizations to commit to coordinating their cybersecurity strategies to identify, mitigate and recover from threats and breaches. This should begin with a process that defines the stakeholders who will participate in the collective defense initiative, which can range from private companies and government agencies to non-profits and Information Sharing and Analysis Centers (ISACs). The approach will only work if it is based on mutual trust, so there is an important role for mechanisms such as non-disclosure agreements, clearly defined roles and responsibilities, and a commitment to operational transparency.


Meaningful Ways to Reward Your IT Team and Its Achievements

With technology rapidly advancing, it's more important than ever to invest in personalized IT team skill development and employee well-being programs, which are a win-win for employees and the companies they work for, says Carrie Rasmussen, CIO at human resources software provider Dayforce, in an email interview. ... Synchronize rewards to project workflows, Felker recommends. If it's a particularly difficult time for the team -- tight deadlines, major changes, and other pressing issues -- he suggests scheduling rewards prior to the work's completion to boost motivation. "Having the team get a boost mid-stream on a project is likely to create an additional reservoir of mental energy they can draw from as the project continues," Felker says. ... It's also important to celebrate success whenever possible and to acknowledge that the outcome was the direct result of great teamwork. "Five minutes of recognition from the CEO in a company update or other forum motivates not only the IT team but the rest of the organization to strive for recognition," Nguyen says. He also advises promoting significant team achievements on LinkedIn and other major social platforms. "This will aid recruiting and retention efforts."


Deepfake research is growing and so is investment in companies that fight it

Manipulating human likeness – creating deepfake images, video and audio of people – has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common motive is to influence public opinion, including swaying political opinion, but the technology is also finding its way into scams, fraud and other means of generating profit. ... Impersonations of celebrities or public figures, for instance, are often used in investment scams, while AI-generated media can also be used to bypass identity verification and to conduct blackmail, sextortion and phishing scams. Because the primary data source is media reports, the researchers warn that the picture of AI misuse may be skewed toward incidents that attract headlines. But despite concerns that sophisticated or state-sponsored actors will use generative AI, many of the misuse cases were found to rely on popular tools that require minimal technical skill. ... With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions to protect images online.


Building Finance Apps: Best Practices and Unique Challenges

By making compliance a central focus from day one of the development process, you maximize your ability to meet compliance needs, while also avoiding the inefficient process of retrofitting compliance features into the app later. For example, implementing transaction reporting after the rest of the app has been built is likely to be a much heavier lift than designing the app from the start to support that feature. ... The tech stack (meaning the set of frameworks and tools you use to build and run your app) can have major implications for how easy it is to build the app, how secure and reliable it is, and how well it integrates with other systems or platforms. For that reason, you'll want to consider your stack carefully, and avoid the temptation to go with whichever frameworks or tools you know best or like the most. ... Given the plethora of finance apps available today, it can be tempting to build fancy interfaces or extravagant features in a bid to set your app apart. In general, however, it's better to adopt a minimalist approach. Build the features your users actually want — no more, no less. Otherwise, you waste time and development resources, while also potentially exposing your app to more security risks.
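
To make the compliance-first point concrete, here is a minimal Python sketch (table and field names are hypothetical, not from the article) of transaction reporting designed in from day one: the audit row is written in the same database transaction as the transfer, so the reporting trail can never drift out of sync with the ledger.

    # Hypothetical sketch: the audit record commits atomically with the
    # transfer itself, rather than being retrofitted later.
    import sqlite3
    from datetime import datetime, timezone

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE accounts (id TEXT PRIMARY KEY, balance_cents INTEGER);
        CREATE TABLE audit_log (ts TEXT, src TEXT, dst TEXT, amount_cents INTEGER);
        INSERT INTO accounts VALUES ('alice', 10000), ('bob', 0);
    """)

    def transfer(src: str, dst: str, amount_cents: int) -> None:
        with conn:  # one atomic transaction: both balance updates plus the audit row
            conn.execute("UPDATE accounts SET balance_cents = balance_cents - ? WHERE id = ?",
                         (amount_cents, src))
            conn.execute("UPDATE accounts SET balance_cents = balance_cents + ? WHERE id = ?",
                         (amount_cents, dst))
            conn.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?)",
                         (datetime.now(timezone.utc).isoformat(), src, dst, amount_cents))

    transfer("alice", "bob", 2500)
    print(conn.execute("SELECT * FROM audit_log").fetchall())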


OVHcloud blames record-breaking DDoS attack on MikroTik botnet

Earlier this year, OVHcloud had to mitigate a massive packet-rate attack that reached 840 Mpps, surpassing the previous record holder, an 809 Mpps DDoS attack targeting a European bank that Akamai mitigated in June 2020. ... OVHcloud says many of the high packet-rate attacks it recorded, including the record-breaking attack from April, originate from compromised MikroTik Cloud Core Router (CCR) devices designed for high-performance networking. The firm specifically identified compromised CCR1036-8G-2S+ and CCR1072-1G-8S+ models, which are used as small- to medium-sized network cores. Many of these devices expose their interface online and run outdated firmware, making them susceptible to attacks leveraging exploits for known vulnerabilities. The cloud firm hypothesizes that attackers might use RouterOS's "Bandwidth Test" feature, designed for network throughput stress testing, to generate high packet rates. OVHcloud found nearly 100,000 MikroTik devices reachable and potentially exploitable over the internet, a large pool of potential resources for DDoS actors.


Set Goals and Measure Progress for Effective AI Deployment

Combining human expertise and AI capabilities to augment decision-making is an essential tenet of responsible AI principles. The current age of AI adoption should be considered a “coming together of humans and technology.” Humans will continue to be the custodians and stewards of data, which ties into Key Factor 2 about the need for high-quality data, as humans can help curate the relevant data sets used to train an LLM. This is critical, and the “human-in-the-loop” facet should be embedded in all AI implementations to avoid fully autonomous systems. Apart from data curation, this allows humans to take more meaningful actions when equipped with relevant insights, achieving better business outcomes. ... Addressing bias, privacy, and transparency in AI development and deployment is the pivotal metric for measuring success. As with any technology, laying out guardrails and rules of engagement is core to this factor. Enterprises such as Accenture implement measures to detect and prevent bias in their AI recruitment tools, helping to ensure fair hiring practices.
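
As an illustration of the “human-in-the-loop” facet, the sketch below (the threshold and field names are assumptions, not from the article) routes low-confidence model outputs to a human review queue instead of acting on them autonomously.

    # Minimal human-in-the-loop sketch: low-confidence predictions are
    # escalated to a human rather than acted on automatically.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        label: str
        confidence: float  # 0.0 - 1.0, as reported by the model

    REVIEW_THRESHOLD = 0.85  # assumption: tune per use case and risk appetite
    human_review_queue: list[Prediction] = []

    def decide(pred: Prediction) -> str:
        if pred.confidence >= REVIEW_THRESHOLD:
            return f"auto-approved: {pred.label}"
        human_review_queue.append(pred)  # a human makes the final call
        return "escalated to human reviewer"

    print(decide(Prediction("approve_loan", 0.97)))
    print(decide(Prediction("approve_loan", 0.61)))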


Site Reliability Engineering State of the Union for 2024

Automation remains at the core of SRE, with tools for container orchestration and infrastructure management playing a critical role. The adoption of containerization technologies such as Docker and Kubernetes has enabled more efficient deployment and scaling of applications. In 2024, we can expect further advancements in automation tools that streamline the orchestration of complex microservices architectures, reducing the operational burden on SRE teams. Infrastructure automation and orchestration enable teams to manage complex systems with greater efficiency and reliability, and the evolution of these technologies, particularly containerization and microservices, has significantly transformed how applications are deployed, managed and scaled. ... With the increasing prevalence of cyberthreats and the tightening of regulatory requirements, security and compliance have become integral aspects of SRE. Automated tools for compliance monitoring and enforcement will become indispensable, enabling organizations to adhere to industry standards while minimizing the risk of data breaches and other security incidents.
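
As a small taste of this kind of automation, the hedged sketch below uses the official Kubernetes Python client to scale a deployment programmatically; the deployment name and namespace are assumptions, and it requires a reachable cluster plus "pip install kubernetes".

    # Sketch of programmatic orchestration with the Kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()   # reads credentials from ~/.kube/config
    apps = client.AppsV1Api()

    # Scale declaratively instead of logging into nodes by hand.
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",    # hypothetical deployment
        namespace="default",
        body={"spec": {"replicas": 5}},
    )
    print("web-frontend scaled to 5 replicas")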


5 Steps to Refocus Your Digital Transformation Strategy for Strategic Advancement

A strategy built around customer value provides measurable outcomes and drives deeper engagement and loyalty. The digital landscape is riddled with risks and opportunities due to rapid technological advancements, especially in data-centric AI. Businesses must stay agile, continually evaluating the risks and rewards of new technologies while maintaining a sharp focus on how these enhancements serve their customer base. ... Organizations with a customer advisory board should leverage it to gain insights directly from those who use their services or products. Engaging customers from the early stages of planning ensures that their feedback and needs directly influence the transformation strategy, leading to more accurate and beneficial implementations. ... One significant mistake IT leaders make is prioritizing technology over customer needs. While technology is a crucial enabler, it should not dictate the strategy. Instead, it should support and enhance the strategy’s core aim — serving the customer. IT leaders must ensure that digital initiatives align with broader business objectives and directly contribute to customer satisfaction and business efficiency.


OpenSSH Vulnerability “regreSSHion” Grants RCE Access Without User Interaction, Most Dangerous Bug in Two Decades

The good news about the OpenSSH vulnerability is that exploitation attempts have not yet been spotted in the wild. Successfully exploiting it required about 10,000 tries to win a race condition using 100 concurrent connections under the researcher's test conditions, or roughly six to eight hours to achieve RCE because ASLR randomizes the address of glibc. The attack will thus likely be limited to those wielding botnets once the technique is reproduced by threat actors. Given the large number of simultaneous connections needed to induce the race condition, the RCE is also readily detected and blocked by firewalls and network monitoring tools. Qualys' immediate advice for mitigation also includes updating network-based access controls and segmenting networks where possible. ... "While there is currently no proof of concept demonstrating this vulnerability, and it has only been shown to be exploitable under controlled lab conditions, it is plausible that a public exploit for this vulnerability could emerge in the near future. Hence it's strongly advised to patch this vulnerability before this becomes the case."
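
For defenders, a sensible first step is simply inventorying OpenSSH versions. The sketch below grabs the SSH identification banner, which servers send in the clear per RFC 4253; the host list is hypothetical, and the flagged range (OpenSSH 8.5p1 through 9.7p1, per the public Qualys advisory) should be verified against vendor guidance rather than taken as authoritative.

    # Inventory sketch: read SSH version banners to flag builds that may
    # need the regreSSHion (CVE-2024-6387) patch.
    import socket

    def ssh_banner(host: str, port: int = 22, timeout: float = 3.0) -> str:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(256).decode(errors="replace").strip()

    for host in ["10.0.0.5", "10.0.0.6"]:  # hypothetical inventory
        try:
            print(f"{host}: {ssh_banner(host)}")  # e.g. 'SSH-2.0-OpenSSH_9.6'
        except OSError as err:
            print(f"{host}: unreachable ({err})")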


New paper: AI agents that matter

So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile. One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). ... Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.
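
The compounding-error arithmetic in the flight-booking example above is worth seeing concretely; a few lines of Python show how a modest 2% per-call failure rate destroys end-to-end reliability as calls are chained.

    # Even a 2% per-call failure rate collapses end-to-end reliability
    # once an agent chains many LLM calls.
    per_call_success = 0.98
    for n_calls in (10, 25, 50, 100):
        end_to_end = per_call_success ** n_calls
        print(f"{n_calls:>3} calls -> {end_to_end:.1%} chance of a flawless run")
    # 50 calls -> about 36%; 100 calls -> about 13%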



Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown

Daily Tech Digest - July 03, 2024

“The artificial intelligence (AI) boom across all industries has fueled anxiety in the workforce, with employees fearing ethical usage, legal risks and job displacement,” EY said in its report. The future of work has shifted due to genAI in particular, enabling work to be done equally well and securely across remote, field, and office environments, according to EY. ... “We can say that the most common worry is that AI will impact an employee’s role – either making it obsolete entirely or changing it in a way which concerns the employee, for example by taking some of the challenge or excitement out of it,” Harris said. “And the point is, these perspectives are already having an impact – irrespective of what the future really holds.” Harris said that in another Gartner survey, employees indicated they were less likely to stay with an organization due to concerns about AI-driven job loss. ... Organizations can also overcome employee AI fears and build trust by offering training or development on a range of topics, such as how AI works, how to create prompts and use AI effectively, and even how to evaluate AI output for biases or inaccuracies. And employees want to learn.


Importance of security-by-design for IT and OT systems in building a security resilient framework

Regular security testing, vulnerability assessments, penetration testing, and compliance audits are vital for identifying vulnerabilities and potential attack vectors. This proactive approach allows organizations to rectify weaknesses, enhance security posture, and protect their systems effectively. For OT systems, specialized testing methods that support the unique requirements of industrial environments are necessary. ... Developers should adopt secure coding practices, including coding standards, input validation, secure data storage, secure communication protocols, code reviews, and automated security testing. These measures help identify and mitigate security issues during the development phase, eliminating common vulnerabilities. Additionally, training developers in secure coding techniques and fostering a security-centric culture within development teams are equally crucial. ... Regular software updates and effective patch management are essential to address newly identified security vulnerabilities. Staying current with security patches and updates for all software components is crucial. 
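
To ground the secure-coding advice, here is a minimal Python sketch combining input validation with a parameterized query; the allowlist pattern is an assumed site policy, not a universal rule.

    # Minimal secure-coding sketch: validate input against an allowlist
    # pattern, then use a parameterized query so user input is never
    # interpolated into SQL (a classic injection vector).
    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    USERNAME_RE = re.compile(r"[a-z0-9_]{3,32}")  # assumed site policy

    def lookup_email(username: str) -> str | None:
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("invalid username")           # reject early
        row = conn.execute(
            "SELECT email FROM users WHERE username = ?",  # placeholder, never an f-string
            (username,),
        ).fetchone()
        return row[0] if row else None

    print(lookup_email("alice"))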


The Case For a Managed Career for Architects

The case for making architecture a managed profession stems from a few critical factors: Rising Levels of Societal Impact: The impact of technology is growing daily. This impact, and the difficulty of technology – not just threats, but people's daily interactions with technology such as subscription models, social media, passwords and banking – is increasingly important to the average person. ... Regulatory Pressure: Increasing pressure is coming to bear on all aspects of technology as it relates to government and regulation, from sustainability, privacy and accountability to impacts on purchasing, monopolies, identity and security. The more prevalent technology becomes in society, the more regulation must be met to ensure appropriate use. ... New Technology Opportunities/Threats: Avoiding catastrophes at both small and large scales is one function of modern professions. Non-professionals are not allowed to play with dangerous research or deploy dangerous products in most fields. ... Severe Demand/Quality Problems: The demand for high-quality architecture professionals is growing daily. This demand can no longer be met by the role-based education methods that were developed in the early rush of the 1990s.


Is your bank’s architecture trapped in the past? It’s time to recompose it

The complexity of banking modernisation, particularly the cost and resource intensiveness associated with a big-bang approach, is one of several reasons many banks may still be stuck on a legacy technology platform. With contemporary architectural techniques such as the “strangulation pattern”, banks can achieve the desired modernisation in a streamlined manner. The strangulation pattern is a software migration strategy that forms a new software layer, the “strangler”, around the legacy banking system. This strangler interacts with the core system’s data and functionality through well-defined APIs. Gradually, new functionalities are developed within the strangler layer in parallel with the legacy systems, allowing the bank to independently test and refine them. Over time, more and more functionality, based on need and complexity, is migrated from the core system to the strangler layer. As a result, the core system becomes less and less critical and can be retired entirely or kept as a backup. Not only does this approach minimise risk compared to a big-bang switchover, it also allows business operations to continue with minimal disruption.
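
A minimal sketch of the strangler idea (all operation names are hypothetical): a thin routing layer sends migrated operations to the new service and everything else to the legacy core, so functionality moves over one piece at a time rather than in a big-bang cutover.

    # Strangler-pattern sketch: the MIGRATED set grows as functionality
    # moves from the legacy core to the new service.
    MIGRATED = {"get_balance", "list_transactions"}

    def legacy_core(op, **kwargs):
        return f"legacy core handled {op}"

    def strangler_service(op, **kwargs):
        return f"new service handled {op}"

    def route(op, **kwargs):
        handler = strangler_service if op in MIGRATED else legacy_core
        return handler(op, **kwargs)

    print(route("get_balance", account="1234"))   # new service
    print(route("open_account", name="Ada"))      # still the legacy core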


Cyberinsurance Premiums are Going Down: Here’s Why and What to Expect

The insurance cycle is described in Wikipedia as “a term describing the tendency of the insurance industry to swing between profitable and unprofitable periods over time…” Such swings are common to all businesses but are particularly relevant to insurance. Within this insurance cycle, the swing is between a ‘hard market’ and a ‘soft market’. Howden defines it thus: “In simple terms, [a soft market] is when there is a lot of insurance capacity, and rates are low. Conversely, a hard market is when insurance capacity is reduced and premium rates are high.” Noticeably, the state of the insured does not figure. “Insurance markets (cyber, property, D&O, etc) tend to run through rating cycles,” explains George Mawdsley, head of risk solutions at DeNexus. “What makes cyber unique is that there is material uncertainty around how big the ‘Big Storms’ can get, which means capital allocators will make conservative assumptions on max downside or will not invest. Given the strong growth projections (demand) for the cyber insurance market, we expect this dynamic to drive up prices over the long term.”


Productivity and patience: How GitHub Copilot is expanding development horizons

Copilot shines in "implementing straightforward, well-defined components in terms of performance and other non-functional aspects. Its efficiency diminishes when addressing complex bugs or tasks requiring deep domain expertise." ... Copilot's greatest challenge is context, he pointed out. "Code and code development has a lot to do with the context that you're dealing with. Are you in a legacy code base or not? Are you in COBOL or in C++ or in JavaScript or TypeScript? It's a lot of context that needs to happen for the quality of that code to be high and for you to accept it." ... The impact on software development from AI will be subtler: "What if a text box is all they needed to be able to accomplish something that creates software and something that they could then derive value from?" For example, said Rodriguez: "If I could say very quickly in my phone, 'Hey, I am thinking of talking to my daughter about these things. Can you give me the last three X, Y, and Z articles and then just create a little program that we could play as a game?' You could envision Copilot being able to help you with that in the future."


How Part-Time Senior Leaders Can Help Your Business

It’s not only CEOs that benefit. With their deep functional expertise, fractionals often serve as advisors and mentors to other C-suite leaders. Barry Hurd, a fractional chief marketing officer (CMO), describes his role as providing expert counsel to full-time CMOs: “I’ve worked with a couple of CMOs who have hired me to simply double-check their work. I act as the executive coach, bringing my 30 years of wisdom and experience.” Similarly, Katie Walter, another fractional CMO, shares an experience where she supported an executive transitioning into a marketing leadership role: “She had never led the marketing function before, so the expectation was that I would work alongside her and help her to become more effective. In this case, I was introduced to the team as her coach.” The benefits also extend to the organization as a whole. Because fractional leaders often juggle multiple roles, they gain access to a wide professional network and are exposed to diverse working methods. This unique position allows them to introduce new ideas and practices among the organizations they serve. 


How the CISO Can Transform Into a True Cyber Hero

Operationalizing readiness, response, and recovery is where the rubber meets the road for the CISO. Plans, processes, and technologies underpin operations, but they each rely on people. Tabletop exercises that focus only on technical response activities strengthen only one "muscle group" of the organization. Consider a different kind of cyber exercise — a war game that involves the entire organization. By exercising the incident management plan with a broader constituency of stakeholders, organizations can build "muscle memory," test communication channels, and identify decisions or risks based on a given scenario. As part of the war game, the recovery team should run through the sequential restoration of systems. By socializing the order in which operations will return after a disruption, the team can reduce the number of "Is it back online yet?" queries received during a real incident. ... There's an old joke that "CISO" stands for "career is seriously over." But today’s CISO has a serious role to play as a hero for their organization. It is a simple matter of evolving from a primarily technical role into one that empowers human peers and stakeholders to become greater collaborators in cyber-incident response, recovery, and readiness.


How CISOs can protect their personal liability

One of the most effective and methodical forms of documentation a CISO can maintain is a risk register that identifies existing cyber risk and records risk acceptance by the relevant business stakeholders. This can bring greater visibility into cyber risk at the board level, and it certainly helps CISOs protect themselves. “In order to run a security program, you have to have a risk register. It’s like table stakes,” says Greg Notch, CISO of Expel, a managed detection and response firm, and a longtime security veteran who served as CISO of the National Hockey League before this job. ... Even with rock-solid policies, procedures, and documentation, CISOs should also seek legal protection through tools like indemnification agreements, employment contract terms, and the right level of insurance coverage. Kolochenko says CISOs who are unsure of their protections should proactively reach out to their general counsel and ask about all of their duties, liabilities, and protections. If something sounds unfavorable, push back, he says.
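
A risk register need not be elaborate to serve this purpose. Here is a minimal sketch (fields and scoring are assumptions; adapt them to your framework) in which every entry records who accepted the risk and when, so acceptance is never implicit.

    # Minimal risk-register sketch: each entry carries a simple severity
    # score plus an explicit record of business acceptance.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RiskEntry:
        risk_id: str
        description: str
        likelihood: int        # 1 (rare) .. 5 (almost certain)
        impact: int            # 1 (minor) .. 5 (severe)
        accepted_by: str | None = None
        accepted_on: date | None = None

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        RiskEntry("R-001", "Legacy VPN lacks MFA", 4, 5, "CFO (J. Doe)", date(2024, 6, 1)),
        RiskEntry("R-002", "No DDoS runbook for payments API", 3, 4),
    ]

    for r in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"{r.risk_id} score={r.score:>2} accepted_by={r.accepted_by or 'UNACCEPTED - escalate'}")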


How New Frameworks for Cyber Metrics are Reshaping Boardroom Conversations

Ideally, boards have one or more sitting executives with risk experience, but the reality is that boards primarily consist of executives with a non-technical understanding of risk management methods. Risk and cybersecurity information must always be conveyed in easy-to-understand, business-oriented language. Start by quantifying risk in monetary terms. Board members may not understand the technical details of Monte Carlo simulations or probabilistic risk assessments, but they do need to understand the potential impact of risk on the business in the most efficient way. Quantification can help anyone understand how the business anticipates risk, prioritizes risk controls, and takes preventative action. Tailor risk information to board members, depending on their expertise and the board report’s purpose; there is no one-size-fits-all approach to reporting. CISOs can segregate risk metrics into categories, like security, financial, third-party, or employee awareness risks. Grouping information together helps non-technical executives understand how risks are interconnected and what’s being done to anticipate them.
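
For the quantification step, a Monte Carlo simulation can be only a few lines. The sketch below (the incident rate and loss distribution are illustrative assumptions, not benchmarks) simulates many years of incidents and reports loss percentiles in the dollar terms a board can read.

    # Monte Carlo sketch: simulate a year of incidents many times, then
    # report annual-loss percentiles in monetary terms.
    import math
    import random
    import statistics

    random.seed(7)
    ANNUAL_RATE = 2.5                       # expected incidents per year (assumed)
    LOSS_MEDIAN, LOSS_SIGMA = 250_000, 1.0  # lognormal loss model (assumed)

    def simulate_year() -> float:
        # Exponential inter-arrival times give the incident count; each
        # incident draws a lognormal loss.
        t = total = 0.0
        while True:
            t += random.expovariate(ANNUAL_RATE)
            if t > 1.0:
                return total
            total += random.lognormvariate(math.log(LOSS_MEDIAN), LOSS_SIGMA)

    years = sorted(simulate_year() for _ in range(100_000))
    print(f"median annual loss: ${statistics.median(years):,.0f}")
    print(f"95th percentile:    ${years[int(0.95 * len(years))]:,.0f}")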



Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how those decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting wasted resources. Because transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures remain consistently effective.
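
One common, concrete explainability technique is permutation importance: shuffle each feature and measure how much model performance degrades. This hedged sketch (synthetic data and assumed feature names; requires scikit-learn) shows how an analyst might surface which signals drove an alert model's verdicts.

    # XAI sketch: permutation importance reveals which features most
    # influence an alert classifier's decisions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    features = ["failed_logins", "bytes_out", "new_country", "off_hours"]
    X = rng.random((500, 4))
    # Toy labels driven mostly by failed_logins and new_country.
    y = (0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(500) > 0.5).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, score in sorted(zip(features, result.importances_mean),
                              key=lambda p: p[1], reverse=True):
        print(f"{name:>14}: {score:.3f}")  # higher = more influence on the verdict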


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient which gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, it has since been applied to a wide range of services that do not fit its original definition. This led to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized computer resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual coding, enabling greater consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy infrastructure building blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC, since the code serves as a centralized and readily accessible source of documentation on the infrastructure. It also streamlines the testing process, allowing for automated testing of compliance, validation, and other checks before deployment. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed APIs, allowing developers to dynamically provision, manage, and orchestrate resources.
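
As a minimal taste of code-first infrastructure, here is a sketch using Pulumi's Python SDK, one of several IaC tools (the resource name is hypothetical; "pulumi up" applies the program, and the pulumi and pulumi_aws packages plus cloud credentials are prerequisites).

    # IaC sketch: the same file is provisioning logic and living
    # documentation at once.
    import pulumi
    import pulumi_aws as aws

    # Declaring the bucket in code makes it reviewable, version-controlled,
    # and reproducible across environments.
    assets = aws.s3.Bucket("app-assets", tags={"env": "staging"})

    pulumi.export("assets_bucket", assets.id)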


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. Generative AI focuses on creating new content from a single type of prompt, such as producing images from textual descriptions; multimodal AI, in contrast, processes and understands different sensory inputs, allowing users to supply various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. By understanding and leveraging this technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us.


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization generated 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein

Daily Tech Digest - July 01, 2024

The dangers of voice fraud: We can’t detect what we can’t see

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious. Imagine receiving a phone call from a loved one’s number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use. Video, on the other hand, provides visual cues. There are clear giveaways in small details like hairlines or facial expressions that even the most sophisticated fraudsters have not been able to get past the human eye. On a voice call, those warnings are not available. That’s one reason most mobile operators, including T-Mobile and Verizon, make free services available to block — or at least identify and warn of — suspected scam calls.


Provider or partner? IT leaders rethink vendor relationships for value

Vendors achieve partner status in McDaniel’s eyes by consistently demonstrating accountability and integrity; getting ahead of potential issues to ensure there’s no interruptions or problems with the provided products or services; and understanding his operations and objectives. ... McDaniel, other CIOs, and CIO consultants agree that IT leaders don’t need to cultivate partnerships with every vendor; many, if not most, can remain as straight-out suppliers, where the relationship is strictly transactional, fixed-fee, or fee-for-service based. That’s not to suggest those relationships can’t be chummy, but a good personal rapport between the IT team and the supplier’s team is not what partnership is about. A provider-turned-partner is one that gets to know the CIO’s vision and brings to the table ways to get there together, Bouryng says. ... As such, a true partner is also willing to say no to proposed work that could take the pair down an unproductive path. It’s a sign, Bouryng says, that the vendor is more interested in reaching a successful outcome than merely scheduling work to do.


In the AI era, data is gold. And these companies are striking it rich

AI vendors have, sometimes controversially, made deals with organizations like news publishers, social media companies, and photo banks to license data for building general-purpose AI models. But businesses can also benefit from using their own data to train and enhance AI to assist employees and customers. Examples of source material can include sales email threads, historical financial reports, geographic data, product images, legal documents, company web forum posts, and recordings of customer service calls. “The amount of knowledge—actionable information and content—that those sources contain, and the applications you can build on top of them, is really just mindboggling,” says Edo Liberty, founder and CEO of Pinecone, which builds vector database software. Vector databases store documents or other files as numeric representations that can be readily mathematically compared to one another. That’s used to quickly surface relevant material in searches, group together similar files, and feed recommendations of content or products based on past interests. 
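
The core idea is small enough to show in a few lines: documents become vectors, and relevance is a mathematical comparison. The tiny hand-made vectors below are purely illustrative; real systems use learned embeddings and approximate-nearest-neighbor indexes.

    # Vector-search sketch: rank documents by cosine similarity to a query.
    import numpy as np

    docs = {
        "refund policy email":   np.array([0.9, 0.1, 0.0]),
        "Q3 financial report":   np.array([0.1, 0.9, 0.2]),
        "customer service call": np.array([0.8, 0.2, 0.1]),
    }
    query = np.array([0.85, 0.15, 0.05])  # e.g. "how do refunds work?"

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for name, vec in sorted(docs.items(), key=lambda kv: -cosine(query, kv[1])):
        print(f"{cosine(query, vec):.3f}  {name}")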


Machine Vision: The Key To Unleashing Automation's Full Potential

Machine vision is a class of technologies that process information from visual inputs such as images, documents, computer screens, and video. Its value in automation lies in its ability to capture and process large quantities of documents, images and video quickly and efficiently, in quantities and at speeds far beyond human capability. ... Machine vision-based technologies are even becoming central to the creation of automations themselves. For example, instead of relying on human workers to describe the processes being automated, recordings of those processes are captured, and machine vision software, combined with other technologies, analyzes them end-to-end to provide the input for automating much of the work of programming the digital workers (bots). ... Machine vision is integral to maximizing the impact of advanced automation technologies on business operations and paving the way for increased capabilities in the automation space.
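
For the document-capture piece specifically, open-source OCR makes a minimal example possible. This sketch uses pytesseract, a wrapper around the Tesseract OCR engine (the file path is hypothetical; the tesseract binary plus the pytesseract and Pillow packages must be installed).

    # OCR sketch: extract text from a scanned document image so it can
    # feed downstream parsing and automation.
    from PIL import Image
    import pytesseract

    text = pytesseract.image_to_string(Image.open("invoice_0001.png"))
    print(text)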


Put away your credit cards — soon you might be paying with your face

Biometric purchases using facial recognition are beginning to gain some traction. The restaurant CaliExpress by Flippy, a fully automated fast-food restaurant, is an early adopter. Whole Foods stores offer pay-by-palm, an alternative biometric to facial recognition; given that they already use biometrics, facial recognition is likely to be available in their stores at some point in the future. ... Just as credit and debit cards have overtaken cash as the dominant means of making purchases, biometrics like facial recognition could eventually become the dominant way to pay. There will, however, be real costs during such a transition, largely absorbed by consumers through higher prices. The technology software and hardware required to implement such systems will be costly, pushing it out of reach for many small- and medium-size businesses. However, as facial recognition systems become more efficient and reliable, and losses from theft are reduced, an equilibrium will be reached that makes the additional costs more modest and manageable to absorb.


Technologists must be ready to seize new opportunities

For technologists, this new dynamic represents a profound (and daunting) change. They’re being asked to report on application performance in a more business-focussed, strategic way and to engage in conversations around experience at a business level. They’re operating outside their comfort zone, far beyond the technical reporting and discussions they’ve previously encountered. Of course, technologists are used to rising to a challenge and pivoting to meet the changing needs of their organisations and their senior leaders. We saw this during the pandemic, many will (rightly) be excited about the opportunity to expand their skills and knowledge, and to elevate their standing within their organisations. The challenge that many technologists face, however, is that they currently don’t have the tools and insights they need to operate in a strategic manner. Many don’t have full visibility across their hybrid environments and they’re struggling to manage and optimise application availability, performance and security in an effective and sustainable manner. They can’t easily detect issues, and even when they do, it is incredibly difficult to quickly understand root causes and dependencies in order to fix issues before they impact end user experience. 


Vulnerability management empowered by AI

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also identifies threats more effectively. ... AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting logs into structured data and charts makes analysis simpler and quicker. Incidents should be prioritized by security risk, with notifications triggered for immediate action. Self-learning is another area where AI can be trained with data, enabling it to keep pace with a changing environment and address new and emerging threats, including high-risk and previously unseen ones. Implementing AI requires iterations to train the model, which can be time-consuming, but over time it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.
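
As a sketch of what converting logs into analyzable data can look like, the toy example below turns per-source log statistics into feature rows and lets an IsolationForest flag outliers for analyst review (the features and numbers are assumptions; requires scikit-learn).

    # Anomaly-detection sketch: numeric features derived from logs, with
    # an IsolationForest surfacing outliers (-1 = anomaly).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [requests_per_min, distinct_paths, error_ratio]
    rng = np.random.default_rng(1)
    normal = rng.normal([60, 8, 0.02], [10, 2, 0.01], size=(200, 3))
    spike = np.array([[900, 150, 0.4]])           # scanner-like burst
    X = np.vstack([normal, spike])

    model = IsolationForest(contamination=0.01, random_state=1).fit(X)
    flags = model.predict(X)

    for row in X[flags == -1]:
        print(f"anomalous source: rpm={row[0]:.0f} paths={row[1]:.0f} err={row[2]:.2f}")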


Why every company needs a DDoS response plan

Given the rising number of DDoS attacks each year and the reality that DDoS attacks are frequently used in more sophisticated hacking attempts to apply maximum pressure on victims, a DDoS response plan should be included in every company’s cybersecurity tool kit. After all, it’s not just a temporary lack of access to a website or application that is at risk. A business’s failure to withstand a DDoS attack and rapidly recover can result in loss of revenue, compliance failures, and impacts on brand reputation and public perception. Successful handling of a DDoS attack depends entirely on a company’s preparedness and execution of existing plans. Like any business continuity strategy, a DDoS response plan should be a living document that is tested and refined over the years. It should, at the highest level, consist of five stages, including preparation, detection, classification, reaction, and postmortem reflection. Each phase informs the next, and the cycle improves with each iteration.


Reduce security risk with 3 edge-securing steps

Over the past several years web-based SSL VPNs have been targeted and used to gain remote access. You may even want to evaluate how your firm allows remote access and how often your VPN solution has been attacked or put at risk. ... “The severity of the vulnerabilities and the repeated exploitation of this type of vulnerability by actors means that NCSC recommends replacing solutions for secure remote access that use SSL/TLS with more secure alternatives,” the authority says. “The NCSC recommends internet protocol security (IPsec) with internet key exchange (IKEv2). Other countries’ authorities have recommended the same.” ... Pay extra attention to how credentials are protected from unauthorized access, follow best practices to secure passwords, and ensure that each user has appropriate access. ... When using cloud services, ensure that only vendors you trust or have thoroughly vetted have access to your cloud services.

The real key to machine learning success is something that is mostly missing from genAI: the constant tuning of the model. “In ML and AI engineering,” Shankar writes, “teams often expect too high of accuracy or alignment with their expectations from an AI application right after it’s launched, and often don’t build out the infrastructure to continually inspect data, incorporate new tests, and improve the end-to-end system.” It’s all the work that happens before and after the prompt, in other words, that delivers success. For genAI applications, partly because of how fast it is to get started, much of this discipline is lost. ... As with software development, where the hardest work isn’t coding but rather figuring out which code to write, the hardest thing in AI is figuring out how or if to apply AI. When simple rules need to yield to more complicated rules, Valdarrama suggests switching to a simple model. Note the continued stress on “simple.” As he says, “simplicity always wins” and should dictate decisions until more complicated models are absolutely necessary.
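
The missing infrastructure Shankar describes can start very small. Here is a minimal sketch of a fixed evaluation set that every model or prompt revision must pass before shipping; the cases and threshold are placeholders, and "model" is any callable you supply.

    # Continual-evaluation sketch: a ship gate backed by a growing test set.
    EVAL_CASES = [
        ("2+2?", "4"),
        ("capital of France?", "paris"),
        # grows over time as production failures become new tests
    ]
    MIN_ACCURACY = 0.9  # assumption: set per product requirements

    def evaluate(model) -> float:
        hits = sum(expect in model(q).lower() for q, expect in EVAL_CASES)
        return hits / len(EVAL_CASES)

    def ship_gate(model) -> None:
        acc = evaluate(model)
        status = "OK to ship" if acc >= MIN_ACCURACY else "BLOCKED"
        print(f"accuracy={acc:.0%} -> {status}")

    ship_gate(lambda q: "4" if "2+2" in q else "Paris")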



Quote for the day:

“The vision must be followed by the venture. It is not enough to stare up the steps - we must step up the stairs.” -- Vance Havner