Daily Tech Digest - November 03, 2023

Embracing the co-evolution: AI's role in enriching the workforce

AI's capabilities in data processing and predictive analytics are undeniably impressive, yet it falls short in embodying human experience: empathy, contextual comprehension, and emotional intelligence. This raises the question: what can AI achieve without human involvement? For example, several automotive companies are integrating LLMs into their vehicles and systems, using them to conduct routine checks and assist with on-road safety and predictive maintenance. But in this case, AI cannot fix any of the problems it detects. To ensure that the challenges detected by AI are addressed, businesses will always need skilled human workers. ... If things with AI aren’t that bad, why does the popular narrative suggest otherwise? The simple answer is timing. The economic conditions, coupled with the aftermath of the pandemic, have left people bracing themselves for the next big disruption. Add the popularity of LLMs into the mix, and you have what looks like the next catastrophe. But that’s far from the truth.


Burnout: An IT epidemic in the making

Even among those who report low or moderate levels of burnout, 25% express a desire to leave their company in the near future. And burnout is also impacting skills acquisition, as 43% of Yerbo survey respondents said they had to stop studying for a certification exam because they were unable to find time due to their workloads. Further, burned-out employees who do leave are highly likely to negatively impact your company’s reputation by sharing their frustrations online and on review sites, where other potential candidates can see them. With tech talent markets always tight, increased burnout within your organization can quickly become not only a retention issue, but a recruitment problem as well. ... Burnout can’t be fixed overnight. Turning around burnout in your organization will require consistency and dedication to improving the employee experience. You’ll need to consider increases in resources, mentoring, and opportunities for advancement, as well as evaluate boundaries around work-life balance and ensure that a healthy balance is reflected and modeled all the way to the top.


Edge and beyond: How to meet the increasing demand for memory

What is needed is a way to improve direct access to offboard memory by providing on-demand access to memory across servers. The industry has recognized this and has been working on a software-defined memory solution for many years in the form of CXL. However, CXL 3.0, which provides complete caching capability, is still several years away, will require new server architecture, and will only be available in forthcoming generations of hardware. Concerns about latency compromises are surfacing, too. Even CXL 3.0 is still piggybacking on the PCI Express (PCIe) physical layer and relying on physical memory paired with PCIe, so one would ordinarily incur a penalty on a critical metric—latency. Generally, the farther the memory is from the CPU, the higher the latency and the poorer the performance. Workloads at the heart of everything from HPC to AI have significant memory requirements. But designers struggle to make use of the additional cores available in modern CPUs. The leap forward in the number of CPU cores is mismatched with a lack of memory bandwidth.
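The core-to-bandwidth mismatch the excerpt describes is easy to see with back-of-the-envelope arithmetic. The figures below are hypothetical round numbers, not vendor specifications:

```python
# Illustrative only: hypothetical figures, not measured or published specs.
def bandwidth_per_core(total_gbps: float, cores: int) -> float:
    """Memory bandwidth available to each core, in GB/s."""
    return total_gbps / cores

# An older 16-core part vs. a modern 96-core part, both assumed to sit
# behind the same ~200 GB/s aggregate memory subsystem:
old = bandwidth_per_core(200, 16)   # 12.5 GB/s per core
new = bandwidth_per_core(200, 96)   # ~2.1 GB/s per core
print(f"per-core bandwidth fell from {old:.1f} to {new:.1f} GB/s")
```

Per-core bandwidth shrinks when core counts climb faster than memory channels, which is the gap pooled, CXL-attached memory aims to close.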


The False Dichotomy of Monolith vs. Microservices

Sure, microservices are more difficult to work with than a monolith -- I’ll give you that. But that argument doesn’t pan out once you’ve seen a microservices architecture with good automation. Some of the most seamless and easy-to-work-with systems I have ever used were microservices with good automation. On the other hand, one of the most difficult projects I have worked on was a large old monolith with little to no automation. We can’t assume we will have a good time just because we choose monolith over microservices. Is the fear of microservices a backlash to the hype? Yes, microservices have been overhyped. No, microservices are not a silver bullet. Like all potential solutions, they can’t be applied to every situation. When you apply any architecture to the wrong problem (or worse, are forced by management to apply the wrong architecture), then I can understand why you might passionately hate that architecture. Is some of the fear from earlier days when microservices were genuinely much more difficult?


The Software Testing Odyssey That You Need to Take

Let’s illustrate a practical scenario where a financial services company is adding new transactional functionalities to its application. Its team uses AI-powered test creation to transform its user stories and requirements into functional test scripts. The AI uses natural language processing to analyze descriptions of test requirements and convert them into executable scripts that simulate user interactions within the banking application. During testing, which is automated and runs at predefined times, a minor UI layout change occurs. This results in a number of tests failing, as the pre-existing automated tests cannot locate the updated element. This is where AI-powered self-healing comes in. The AI algorithm, powered by classification techniques, meticulously inspects the failed tests and compares them with previous test versions. Through this analysis, the AI identifies the UI element change that caused the failures and autonomously updates the test scripts with new locators for the changed elements.


3 Ways of Protecting Your Public Cloud Against DDoS Attacks

While basic DDoS protection offered by CSPs is free, more advanced or comprehensive protection options come with additional costs. This becomes quite expensive because you will need to pay a monthly fee for each account or resource, and if you need more visibility into the traffic, you must turn on and pay for an additional service. All the additional charges add up quickly. Best for: All in all, the native DDoS protection offered by cloud service providers delivers basic coverage for most network-layer attacks. This will be good for those looking for cheap, no-hassle, integrated protection with low latency. ... Third-party DDoS mitigation services are best for organizations looking for dedicated, advanced DDoS protection, particularly for mission-critical applications. They are also suitable for organizations that are frequently attacked and need constant, high-grade protection. In summary, DDoS protection is a fundamental component of cybersecurity in public cloud environments.
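How per-resource fees compound can be sketched with hypothetical numbers; real CSP rate cards differ and change often:

```python
# Hypothetical fee schedule, purely to show how per-resource charges stack.
monthly_fee_per_resource = 30.0      # advanced protection, per protected resource
traffic_visibility_addon = 300.0     # flow-log/analytics add-on, per account
accounts, resources_per_account = 4, 25

total = accounts * (resources_per_account * monthly_fee_per_resource
                    + traffic_visibility_addon)
print(f"${total:,.0f}/month")  # $4,200/month
```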


What is data security posture management?

As defined by Gartner, “data security posture management (DSPM) provides visibility as to where sensitive data is, who has access to that data, how it has been used and what the security posture of the data stored or application is.” The DSPM approach aims to help organizations in three ways to improve their security posture: cloud data visibility, cloud data movement and cloud data protection. Cloud data visibility: Discover shadow data rapidly expanding in the cloud with autonomous data discovery. This capability provides a powerful and frictionless way to find data that sprawls within cloud service providers and Software-as-a-Service (SaaS) apps. Understanding where your data resides helps to shrink your attack surface and reduce data risks. Cloud data movement: Analyze potential and actual data flows across the cloud. Identifying where and how data moves will help provide clarity on which data access controls and policies can best prevent vulnerabilities and misconfigurations. Cloud data protection: Uncover vulnerabilities in data and compliance controls and posture. 


Harnessing Conflict To Create An Ideal Company Culture

The ideal work environment encourages open communication and provides psychological safety for team members to share their views and opinions in a respectful way. Cultivating this type of workplace takes time, practice and training. Effective communication is a skill that not all employees are taught, especially when it comes to expressing dissent or differing points of view. Occasional training, coaching sessions and/or other materials may be necessary to teach team members how to communicate respectfully. Courses can walk through theoretical conversations and provide practical tips on how to thoughtfully explain one’s point of view without causing offense or personally attacking those who see things differently. Coaching sessions could also be a valuable resource so that teammates can have a person available to help them evaluate real-life scenarios that they may encounter. Often business coaching can include role-play in those scenarios that allow people to practice their new skills. Successful leaders acknowledge and appreciate a diversity of voices—even the dissenters—in their company culture.


The House of Data and Data Stewardship with Dr. James Barker

“The House of Data is loosely based on Toyota’s House of Quality, which was a hot topic when I got my master’s degree,” Barker began. “When I was at Honeywell getting their first data governance council going, we had a diagram that included things such as master data management (MDM), data quality, standards, and enforcement as part of it, but it really wasn’t resonating with people. Then I saw an example of a pillar diagram at a conference and took it back to my team to apply to our work.” The original House of Data diagram had four pillars -- data quality, data security, MDM, and compliance -- with data architecture as the floor and the governance council itself as the roof. ... At the primary level, you have your lead data stewards working together to keep things moving forward, whether aligned around a specific function (such as finance or manufacturing) or around a line of business. This type of council works best at large organizations, includes a mix of LOB and functional representation, and often meets weekly to stay up to date on what’s working and what’s not. 


Why IT and Cybersecurity Need Apprenticeships Now More Than Ever

From the apprentice’s perspective, this pathway promises numerous benefits: acquisition of in-demand skills, paid learning opportunities, valuable field-specific experience, and networking avenues with potential employers. Often, apprenticeships culminate in full-time job offers, presenting a clear trajectory for career advancement. Businesses, in parallel, stand to gain significantly. Through apprenticeships, they can nurture a workforce tailored to their unique needs, potentially reducing turnover, diversifying their teams, and boosting overall morale and productivity. However, hiring apprentices is not the slam-dunk some government agencies make it out to be. Although companies can be reimbursed for the training costs of registering an apprentice program, participation has drawbacks. The application process is time-consuming, and most states require an Apprenticeship Governance Board to approve or reject an application. While this process admirably maintains rigor in programs, it could be streamlined. After successful registration, there are compliance steps, related training and instruction, and mentor assignments.



Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins

Daily Tech Digest - November 02, 2023

How Banks Can Turn Risk Into Reward Through Data Governance

To understand why data governance is critical for banks, we must understand the underlying challenges facing financial services organizations as they modernize. Rolling out new cloud applications or Internet of Things (IoT) devices into an environment where legacy on-premises systems are already in place means more data silos and data sets to manage. Often, this results in data volumes, variety, and velocity increasing much too quickly for banks. This gives rise to IT complexity—driven by technical debt or the reliance on systems cobbled together and one-off connections. Not only that, it also raises the specter of 'shadow IT' as employees look for workarounds to friction in executing tasks. This can create difficulties for banks trying to identify and manage their data assets in a consistent, enterprise-wide way that is aligned with business strategy. Ultimately, barely controlled data leads to errant financial reporting, data privacy breaches, and non-compliance with consumer data regulations. Failing to counter these risks can lead to fines, hurt brand image, and trigger lost sales. 


Key Considerations for Developing Organizational Generative AI Policies

It's crucial to ensure that all relevant stakeholders have a voice in the process, both to make the policy comprehensive and actionable and to ensure adherence to legal and ethical standards. The breadth and depth of stakeholders involved will depend on the organizational context, such as regulatory/legal requirements, the scope of AI usage and the associated potential risks (e.g., ethics, bias, misinformation). Stakeholders offer technical expertise, ensure ethical alignment, provide legal compliance checks, offer practical operational feedback, collaboratively assess risks, and jointly define and enforce guiding principles for AI use within the organization. Key stakeholders—ranging from executive leadership, legal teams and technical experts to communication teams, risk management/compliance and business group representatives—play crucial roles in shaping, refining and implementing the policy. Their contributions ensure legal compliance, technical feasibility and alignment with business and societal values.


CIOs sharpen cloud cost strategies — just as gen AI spikes loom

One key skill CIOs are honing to lower costs is their ability to negotiate with cloud providers, said one CIO who declined to be named. “People better understand the charges, and [they] better negotiate costs. After being in cloud and leveraging it better, we are able to manage compute and storage better ourselves,” said the CIO, who notes that vendors are not cutting costs on licenses or capacity but are offering more guidance and tools. “After some time, people have understood the storage needs better based on usage and preventing data extract fees.” Thomas Phelps, CIO and SVP of corporate strategy at Laserfiche, says cloud contracts typically include several “gotchas” that IT leaders and procurement chiefs should be aware of, and he stresses the importance of studying terms of use before signing. ... CIOs may also fall into the trap of misunderstanding product mixes and the downside of auto-renewals, he adds. “I often ask vendors to walk me through their product quote and explain what each product SKU or line item is, such as the cost for an application with the microservices and containerization,” Phelps says. 


Misdirection for a Price: Malicious Link-Shortening Services

Security researchers gave the service the codename "Prolific Puma." They discovered it by identifying patterns in links being used by some scammers and phishers that appeared to trace to a common source. The service appears to have been active since at least 2020 and is regularly used to route victims to malicious domains, sometimes first via other link-shortening service URLs. "Prolific Puma is not the only illicit link shortening service that we have discovered, but it is the largest and the most dynamic," said Renee Burton, senior director of threat intelligence for Infoblox, in a new report on the cybercrime service. "We have not found any legitimate content served through their shortener." Infoblox, a Santa Clara, California-based IT automation and security company, published a list of 60 URLs it has tied to Prolific Puma's attacks. The URLs employ such domains as hygmi.com, yyds.is, 0cq.us, 4cu.us and regz.information. Infoblox said many domains registered by the group are parked for several weeks before being used, since many reputation-based security defenses will treat freshly registered domains as more likely to be malicious.
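The domain-age heuristic those defenses rely on can be sketched in a few lines; the dates and 30-day window below are made up for illustration:

```python
# Toy reputation check: treat domains registered within the last N days
# as higher risk. Registration dates here are invented.
from datetime import date, timedelta

def is_freshly_registered(registered: date, today: date, window_days: int = 30) -> bool:
    """True if the domain was registered inside the risk window."""
    return (today - registered) < timedelta(days=window_days)

today = date(2023, 11, 1)
print(is_freshly_registered(date(2023, 10, 20), today))  # True  -> flag as risky
print(is_freshly_registered(date(2023, 8, 1), today))    # False -> aged past the window
```

Parking domains for several weeks is precisely how Prolific Puma ages them past this kind of check before putting them to work.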


DNS security poses problems for enterprise IT

EMA asked research participants to identify the DNS security challenges that cause them the most pain. The top response (28% of all respondents) is DNS hijacking. Also known as DNS redirection, this process involves intercepting DNS queries from client devices so that connection attempts go to the wrong IP address. Hackers often achieve this by infecting clients with malware so that queries go to a rogue DNS server, or they hack a legitimate DNS server and hijack queries at a much larger scale. The latter method can have a large blast radius, making it critical for enterprises to protect DNS infrastructure from hackers. The second most concerning DNS security issue is DNS tunneling and exfiltration (20%). Hackers typically exploit this issue once they have already penetrated a network. DNS tunneling is used to evade detection while extracting data from a compromised network. Hackers hide extracted data in outgoing DNS queries. Thus, it’s important for security monitoring tools to closely watch DNS traffic for anomalies, like abnormally large packet sizes. The third most pressing security concern is a DNS amplification attack (20%).
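A toy version of the anomaly check described above, flagging query names that are unusually long or carry high-entropy labels. The thresholds are illustrative, not tuned values from any product:

```python
# Tunneled data is typically encoded into the leftmost DNS label, which
# makes query names long and statistically "random" compared with
# ordinary hostnames. Flag both symptoms.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunneling(qname: str, max_len: int = 60, max_entropy: float = 4.0) -> bool:
    label = qname.split(".")[0]          # leftmost label carries the payload
    return len(qname) > max_len or shannon_entropy(label) > max_entropy

print(looks_like_tunneling("www.example.com"))  # False
print(looks_like_tunneling("dGhpcyBpc2V4ZmlsdHJhdGVkZGF0YQ" * 2 + ".evil.example"))  # True
```

Real monitoring tools would baseline per-client query volume and label statistics rather than use fixed cutoffs, but the signal is the same.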


Data governance that works

Once we've found our targeted business initiatives and the data is ready to meet the needs of those initiatives, there are three major governance pillars we want to address for that data: understand, curate, and protect. First, we want to understand the data. That means having a catalog of data that we can analyze and explain. We need to be able to profile the data, to look for anomalies, to understand the lineage of that data, and so on. We also want to curate the data, or make it ready for our particular initiatives. We want to be able to manage the quality of the data, integrate it from a variety of sources across domains, and so on. And we want to protect the data, making sure we comply with regulations and manage the life cycle of the data as it ages. More importantly, we need to enable the right people to get to the right data when they need it. AWS has tools, including Amazon DataZone and AWS Glue, to help companies do all of this. It's really tempting to attack these issues one by one and to support each individually. But in each pillar, there are so many possible actions that we can take. This is why it's better to work backwards from business initiatives.
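A minimal flavor of the profiling step under the "understand" pillar, using invented sample records; catalog services such as AWS Glue automate this kind of scan across whole tables:

```python
# Profile a numeric column and flag values far outside the distribution.
# The transaction amounts are made up for illustration.
from statistics import mean, stdev

amounts = [120.0, 98.5, 134.2, 101.7, 9_999.0, 110.3]  # one suspicious outlier

mu, sigma = mean(amounts), stdev(amounts)
anomalies = [x for x in amounts if abs(x - mu) > 2 * sigma]
print(anomalies)  # [9999.0]
```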


EU digital ID reforms should be ‘actively resisted’, say experts

The group’s concerns over the amendments largely centre on Article 45 of the reformed eIDAS, where it says the text “radically expands the ability of governments to surveil both their own citizens and residents across the EU by providing them with the technical means to intercept encrypted web traffic, as well as undermining the existing oversight mechanisms relied on by European citizens”. “This clause came as a surprise because it wasn’t about governing identities and legally binding contracts, it was about web browsers, and that was what triggered our concern,” explained Murdoch. ... All websites today are authenticated by root certificates controlled by certificate authorities, which assure the user that the cryptographic keys used to authenticate the website content belong to the website. The certificate owner can intercept a user’s web traffic by replacing these cryptographic keys with ones they control, even if the website has chosen to use a different certificate authority with a different certificate. There are multiple cases of this mechanism having been abused in reality, and legislation to govern certificate authorities does exist and, by and large, has worked well.
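The scale of that default trust is easy to inspect. For example, Python's ssl module can report how many roots the platform trust store hands to an ordinary TLS client (the counts vary by operating system):

```python
# Inspect the default certificate trust store a TLS client starts with.
# Any CA counted here can issue a certificate the client will accept.
import ssl

ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()   # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}
print(stats)
```

That breadth is the crux of the objection to Article 45: a mandated authority added to this store could vouch for keys it controls, for any site.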


The key to success is to think beyond the obvious, to innovate and look for solutions

AI systems, including machine learning models, make critical decisions and recommendations, so ensuring the accuracy and reliability of these AI models is paramount. AI relies heavily on data, and ensuring data quality, integrity, and consistency is a crucial task. Data pre-processing and validation are necessary steps to make AI models work effectively. Integrating software testing into the software development life cycle helps identify and rectify issues that could lead to incorrect predictions or decisions, minimizing the risks associated with AI tools. AI models are susceptible to adversarial attacks, and robust security testing helps identify vulnerabilities and weaknesses in AI systems, protecting them from cyber threats and ensuring the safety of automated processes. Testing is not a one-time effort; it’s an ongoing process. Regular testing and monitoring are necessary to identify issues that may arise as AI models and automated systems evolve. High-quality, well-tested AI-driven automation can provide a competitive advantage.


We built a ‘brain’ from tiny silver wires.

We are working on a completely new approach to “machine intelligence”. Instead of using artificial neural network software, we have developed a physical neural network in hardware that operates much more efficiently. ... Using nanotechnology, we made networks of silver nanowires about one thousandth the width of a human hair. These nanowires naturally form a random network, much like the pile of sticks in a game of pick-up sticks. The nanowires’ network structure looks a lot like the network of neurons in our brains. Our research is part of a field called neuromorphic computing, which aims to emulate the brain-like functionality of neurons and synapses in hardware. Our nanowire networks display brain-like behaviours in response to electrical signals. External electrical signals cause changes in how electricity is transmitted at the points where nanowires intersect, which is similar to how biological synapses work. There can be tens of thousands of synapse-like intersections in a typical nanowire network, which means the network can efficiently process and transmit information carried by electrical signals.


Why public/private cooperation is the best bet to protect people on the internet

Neither the FTC nor the SEC was empowered by Congress with responsibility for cyberspace, and both have relied on pre-existing authorities related to corporate representations to bring actions against individuals who did not have corporate duties managing legal or external communications. They are using the tools at their disposal to change expectations, even if it means bringing a bazooka to a knife fight. These cases make CISOs worried that in addition to being technical experts they also need to personally become experts on data breach disclosure laws and experts on SEC reporting requirements rather than trusting their peers in the legal and communications departments of their organizations. What we need is a real partnership between the public and the private sector, clear rules and expectations for IT professionals and law enforcement, and an executive branch that will attempt regulation through rulemaking rather than through ugly and costly enforcement actions that target IT professionals for doing their jobs and further deepen the adversarial public-private divide.



Quote for the day:

"Leadership is working with goals and vision; management is working with objectives." -- Russel Honore

Daily Tech Digest - October 31, 2023

Do programming certifications still matter?

Hiring is one area where programming certifications definitely play a role. “One of the key benefits of having programming certifications is that they provide validation of a candidate's skills and knowledge in a particular programming language, framework, or technology,” says Aleksa Krstic, CTO at Localizely, a provider of a cloud-based translation platform. “Certifications can demonstrate that the individual has met certain standards and has the expertise required to perform a specific job.” For employers, programming certifications offer several advantages, Krstic says. “They can help streamline the hiring process, by providing a benchmark for assessing candidates' skills and knowledge,” he says. “Certifications can also serve as a way to filter out applicants who do not meet the minimum requirements.” In cases where multiple candidates are equally qualified, having a relevant certification can give one candidate an edge over others, Krstic says. “When it comes to certifications in general, when we see a junior to mid-level developer armed with programming certifications, it's a big green light for our hiring team,” says Michał Kierul, CEO of software company SoftBlue.


Overseeing generative AI: New software leadership roles emerge

In addition to line-of-business expertise, the rise of AI will mean there is also a growing focus on prompt engineering and in-context learning capabilities. Databricks' Zutshi says, "This is a newer ability for developers to optimize prompts for large language models and build new capabilities for customers, further expanding the reach and capability of AI tools." Yet another area where software leaders will need to take the lead is AI ethics. Software engineering leaders "must work with, or form, an AI ethics committee to create policy guidelines that help teams responsibly use generative AI tools for design and development," Gartner's Khandabattu reports in her analysis. Software leaders will need to identify and help "to mitigate the ethical risks of any generative AI products that are developed in-house or purchased from third-party vendors." Finally, recruiting, developing, and managing talent will also get a boost from generative AI, Khandabattu adds. Generative AI applications can speed up hiring tasks, such as performing a job analysis and transcribing interview summaries.


Generative Agile Leadership: What The Fourth Industrial Revolution Needs

Expanding the metaphor of the head, heart and hands, I've developed eight generative agile leadership (GAL) principles; they are the structure needed to create resilient teams of happy, contributing people who amplify satisfied customers and deliver outcomes for a thriving business. ... The GAL principles come from Peter Senge's learning organization and Ron Westrum’s organizational cultures. The learning organization is an adaptive entity that expands the capabilities of people and the whole system. The generative model is a performance-oriented organizational culture to ensure that people have high trust and low blame to increase the ability to express new ideas. ... The great-person leadership style that emphasizes that leaders are born and not made will not age well in the 4IR. The human-centered generative leadership model is the best approach to leading the four generations. The GAL principles are rooted in the idea that leaders should help their employees grow and develop as individuals. Generative leaders focus on creating a learning environment and providing their employees with opportunities to reach their full potential.


IT Must Clean Up Its Own Supply Chain

At the end of our supply chain “clean up” exercise, we were pleased that we had gained a good handle on our vendor services and products. This would enable us to operate more efficiently. We were also determined to never fall into this supply chain quagmire again! To avoid that, we created a set of ongoing supply chain management practices designed to maintain our supply chain on a regular basis. We met regularly with vendors, designed a “no exceptions” contract review as part of every RFP process, and no longer settled for boilerplate vendor contracts that didn’t have expressly stated SLAs. We also made it a point to attend key vendor conferences and to actively participate in vendor client forums, because we believed it would give us an opportunity to influence vendor product and service directions so they could better align with our own. End to end, this exercise consumed time and resources, but it succeeded in capturing our attention. Attention to IT supply chains is even more relevant today as IT increasingly gets outsourced to the cloud.


‘Data poisoning’ anti-AI theft tools emerge — but are they ethical?

Hancock said genAI development companies are waiting to see how aggressive “or not” government regulators will be with IP protections. “I suspect, as is often the case, we’ll look to Europe to lead here. They’re often a little more comfortable protecting data privacy than the US is, and then we end up following suit,” Hancock said. To date, government efforts to address IP protection against genAI models are at best uneven, according to Litan. “The EU AI Act proposes a rule that AI model producers and developers must disclose copyright materials used to train their models. Japan says AI generated art does not violate copyright laws,” Litan said. “US federal laws on copyright are still non-existent, but there are discussions between government officials and industry leaders around using or mandating content provenance standards.” Companies that develop genAI are more often turning away from indiscriminate scraping of online content and instead purchasing content to ensure they don’t run afoul of IP statutes. That way, they can offer customers purchasing their AI services reassurance they won’t be sued by content creators.


SEC sues SolarWinds and its CISO for fraudulent cybersecurity disclosures

The SolarWinds case could act as a pivotal point for the role of a CISO, transforming it into one that requires a lot more scrutiny and responsibility. "SolarWinds incident highlights the responsibility of CISOs of publicly listed companies in not only managing the cyberattacks but also proactively informing customers and investors about their cybersecurity readiness and controls," said Pareekh Jain, chief analyst at Pareekh Consulting. "This lawsuit highlights that there were red flags earlier that the CISO failed to disclose. This will make corporations and CISOs take notice and take proactive security disclosure more seriously similar to how CFOs take financial information disclosure seriously." "There are many unknowns here; we don’t know if the CISO 'succumbed' to pressure from other leaders or if he was complicit in the hack," said Agnidipta Sarkar, vice president for CISO Advisory at ColorTokens Inc. "In either case, he is the target. But the reality is that the CISO is a very complex role. We are constantly required to navigate internal politics and pushbacks, and unless you are on your toes, you will be at the mercy of external forces at a scale no other CXO is exposed to."


Why adaptability is the new digital transformation

Sustainability and resilience are mature management disciplines because a lot of attention has been paid to developing strategies and implementing solutions to address them. When it comes to adaptability, however, apart from agile methodologies and adaptation as it relates to climate change, there’s very little to learn from in terms of the body of work, which is why I addressed this issue in “A Guide to Adaptive Government: Preparing for Disruption.” Adaptive systems and resilient systems are often confused and thought of as interchangeable, but there’s a vast difference between the two concepts. Whereas an adaptive system restructures or reconfigures itself to best operate in and optimize for the ambient conditions, a resilient system often simply has to restore or maintain an existing steady state. In addition, whereas resilience is a risk management strategy, adaptability is both a risk management and an innovation strategy. The philosophy behind adaptive systems is more about innovation than risk management. It assumes from the start that there are no steady-state conditions to operate within, but that the external environment is constantly changing.


Bringing Harmony to Chaos: A Dive into Standardization

Companies with different engineering teams working on various products often emphasize the importance of standardization. This process helps align large teams, promoting effective collaboration despite their diverse focuses. By ensuring consistency in non-functional aspects, such as security, cost, compliance and observability, teams can interact smoothly and operate in harmony, even with differing priorities. Standardizing these non-functional elements is key for maintaining system strength and resilience. It helps in setting consistent guidelines and practices across the company, minimizing conflicts. The aim is to seamlessly integrate standardization within these elements to improve adaptability and consistency. However, achieving this standardization isn’t easy. Differences in operational methods can lead to inconsistencies. ... The aim of standardization is to create smooth and uniform processes. However, achieving this isn’t always easy. Challenges arise from different team goals, changing technology and the tendency to unnecessarily create something new.


When tightly managing costs, smart founders will be rigorous, not ruthless 

Instead of ruthless, indiscriminate cost-cutting, it is wise to be very frugal about what doesn’t matter while you continue maintaining or even moderately investing in the things that do matter. When making cuts, never lose sight of your people. They’re anxious about the future, and you can’t expect to add more stress and excessive demands to already-stressed workers. ... The outright elimination of things like team lunches, in-person meetings and little daily perks creates instant animosity. Thoughtful cuts instead create visible and tangible reminders of the current environment, especially when considering how important in-person gatherings are to sustaining a robust culture in a remote work environment. Instead of quarterly in-person employee meetups, move to annual and replace the others with a DoorDash gift card and a video meeting. Curtailing all travel — both sales calls and team meetups — not only hurts morale, it allows justifiable excuses for missed targets, lost deals and churned customers.


A Beginner's Guide to Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation is a method that combines the powers of large pre-trained language models (like the one you're interacting with) with external retrieval or search mechanisms. The idea is to enhance the capability of a generative model by allowing it to pull information from a vast corpus of documents during the generation process. ... RAG has a range of potential applications, and one real-life use case is in the domain of chat applications. RAG enhances chatbot capabilities by integrating real-time data. Consider a sports league chatbot. Traditional LLMs can answer historical questions but struggle with recent events, like last night's game details. RAG allows the chatbot to access up-to-date databases, news feeds and player bios. This means users receive timely, accurate responses about recent games or player injuries. For instance, Cohere's chatbot provides real-time details about Canary Islands vacation rentals — from beach accessibility to nearby volleyball courts. Essentially, RAG bridges the gap between static LLM knowledge and dynamic, current information.
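The retrieve-then-generate loop described above can be sketched in a few lines. This is a minimal toy illustration, not any vendor's implementation: the bag-of-words "embedding," the sample corpus, and the prompt template are all stand-ins for a real embedding model, vector store, and LLM call.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: token -> count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    # Stuff the retrieved context into the prompt sent to the generator.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

# Hypothetical up-to-date documents a sports-league chatbot might index.
corpus = [
    "The Hawks beat the Comets 4-1 last night; Ramos scored twice.",
    "Ticket prices for the 2019 season averaged $42.",
    "Player bio: Ramos joined the Hawks in 2021 as a forward.",
]
prompt = build_prompt("Who scored in last night's game?", corpus)
```

Because the game recap scores highest against the query, the generator receives last night's result as context instead of relying on its static training data — which is exactly the gap RAG closes.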



Quote for the day:

“Vulnerability is the birthplace of innovation, creativity, and change.” -- Brené Brown

Daily Tech Digest - October 28, 2023

Surviving a ransomware attack begins by acknowledging it’s inevitable

Senior management teams that see ransomware attacks as inevitable are quicker to prioritize actions that seek to reduce the risk of an attack and contain one when it happens. This mindset redirects board-level discussions of cybersecurity as an operating expense to a long-term investment in risk management. CISOs need to be part of that discussion and have a seat on the board. With the inevitability of ransomware attacks and risks to the core part of any business, CISOs must guide boards and provide them with insights to minimize risk. A great way for CISOs to gain a seat on boards is to show how their teams drive revenue gains by providing continuous operations and reducing risks. “When your board wants to talk about ransomware, remind them that it might take the form of day-to-day improvements — in your patching cadence, how you manage identity, how you defend environments and do infrastructure as code, how you do immutable backups and so forth,” Baer told VentureBeat.


Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI

A lot of what Sutskever says is wild. But not nearly as wild as it would have sounded just one or two years ago. As he tells me himself, ChatGPT has already rewritten a lot of people’s expectations about what’s coming, turning “will never happen” into “will happen faster than you think.” “It’s important to talk about where it’s all headed,” he says, before predicting the development of artificial general intelligence (by which he means machines as smart as humans) as if it were as sure a bet as another iPhone: “At some point we really will have AGI. Maybe OpenAI will build it. Maybe some other company will build it.” Since the release of its sudden surprise hit, ChatGPT, last November, the buzz around OpenAI has been astonishing, even in an industry known for hype. No one can get enough of this nerdy $80 billion startup. World leaders seek (and get) private audiences. Its clunky product names pop up in casual conversation. OpenAI’s CEO, Sam Altman, spent a good part of the summer on a weeks-long outreach tour, glad-handing politicians and speaking to packed auditoriums around the world. 


Lack of federal data privacy law seen hurting IT security

Dawson said the challenge will be overcoming two significant misconceptions about data collection. "The two big myths in this space are, 'They already have everything, so why bother?' and, 'If you have nothing to hide, what are you worried about?'" she said. "Those are two very deliberately structured myths to enable this sense of complacency about all of this data collection." Data collection occurs in multiple facets of consumer life, whether that's through online shopping, social media, travel or even online searches. Dawson said companies bring those data points together to create a 360-degree view of a consumer. She asserted that if consumers fully grasped the extent of companies' data collection, they might not consent willingly. ... "All of this data collection -- here's the church you go to, here's the alcohol you like, here's the guilty pleasure you like to read that nobody knows about. Now all of that can be merged together and can create a very different picture about your life in a way that people are probably not going to be very comfortable with," she said.


Why Infrastructure as Code Is Vital for Modern DevOps

Due to its ability to tackle the ownership problem, DevOps teams have embraced IaC in droves. Because of its ability to abstract, simplify and standardize deployments, IaC has proven a great boon in helping teams achieve continuous integration and continuous delivery (CI/CD). IaC has proven useful to CI/CD practices because it allows DevOps teams to make iterative improvements on apps and services without having to reassign or reconfigure an underlying piece of infrastructure. With IaC, developer teams can focus just on the application, with the onus for infrastructure configuration being on the respective owner as a separate workflow. This is especially useful for complex infrastructure arrangements, such as Kubernetes clusters. Additionally, IaC instructions can be monitored, committed and reverted by teams with ease. Just as they would with a regular coding workflow, code for IaC tools can be rapidly iterated on for infrastructure reconfiguration on the fly to reflect the pace of innovation in a CI/CD environment.
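The "declare desired state, let tooling reconcile it" idea at the heart of IaC can be shown with a toy plan step, loosely modeled on what a `terraform plan` does. The resource names and fields here are hypothetical; real IaC tools diff against live provider APIs, not in-memory dicts.

```python
# Desired state, as it would be committed to version control.
desired = {
    "web-server": {"size": "t3.small", "replicas": 3},
    "database": {"size": "db.m5.large", "replicas": 1},
}

# Actual state, as discovered from the running environment.
actual = {
    "web-server": {"size": "t3.small", "replicas": 2},
    "cache": {"size": "t3.micro", "replicas": 1},
}

def plan(desired, actual):
    """Compute the change set that reconciles actual state with desired state."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            changes.append(("create", name))      # declared but missing
        elif actual[name] != spec:
            changes.append(("update", name))      # drifted from declaration
    for name in actual:
        if name not in desired:
            changes.append(("delete", name))      # running but not declared
    return changes

changes = plan(desired, actual)
```

Because the plan is derived from files, it can be code-reviewed, committed, and reverted like any other change — which is the property the paragraph above highlights.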


Why CIOs and CDOs Need to Rethink Data Management and Tap AI To Maximize Insights

The difference between the old model and the modern approach is that in the past, business leaders were swamped with data and tried to sift through it to find business insights. Today, in the era of AI and ML, data-driven companies start with the end goal of relevant business insights and then work backwards by delving into the data. “The top-down approach is a great way to pursue business insights because, if you know what you’re looking for in terms of measuring the efficiency of your business group, and if you have day-to-day challenges that you’re looking to better understand, it would absolutely help to use that as the driver,” says Ajay. This approach ensures companies derive insights from data they know is available. Starting with specific business challenges and finding the corresponding data enables businesses to pull together data fragments, aggregate them, and define specific metrics and KPIs that can be acted on. ... One of the critical drivers for accelerating business growth is the cloud. Ajay explained how he advocates for businesses to recognize how their cloud vendors are helping them accelerate their data-driven ambitions. 


How will cyber security evolve in the data-driven world?

Data breaches within the automotive sector have become more frequent, particularly involving well-known manufacturers and brands. Earlier this year there was a data leak of Toyota customers in Japan which was publicly available for a decade due to a simple technical error. Over two million customers had data exposed—that’s nearly the entire customer base which had signed up for Toyota’s main cloud service platforms since 2012. Then, prominent automotive retailer Arnold Clark was blackmailed by hackers after suffering a data breach. It was reported that customers had their addresses, passports and national insurance numbers leaked on the dark web following a cyber attack on the car retail giant. More recently, Tesla disclosed a data breach impacting roughly 75,000 people. Notably, this is the result of a whistle-blower leak rather than a malicious cyber attack. The compromised information includes names, contact information, and employment-related records associated with current and former employees as well as customer bank details, production secrets, and customer complaints regarding driver assistance systems.


White House to issue AI rules for federal employees

For companies developing AI, the executive order might necessitate an overhaul in how they approach their practices, according to Adnan Masood, chief AI architect at digital transformation services company UST. The new rules may also drive up operational costs initially. "However, aligning with national standards could also streamline federal procurement processes for their products and foster trust among private consumers," Masood said. "Ultimately, while regulation is necessary to mitigate AI’s risks, it must be delicately balanced with maintaining an environment conducive to innovation." "If we tip the scales too far towards restrictive oversight, particularly in research, development, and open-source initiatives, we risk stifling innovation and conceding ground to more lenient jurisdictions globally," Masood continued. "The key lies in making regulations that safeguard public and national interests while still fueling the engines of creativity and advancement in the AI sector." Masood said the upcoming regulations from the White House have been "a long time coming, and it’s a good step [at] a critical juncture in the US government's approach to harnessing and containing AI technology."


How Collaboration Among Stakeholders Can Help Better Manage Insider Threats

Not surprisingly, the most effective approach includes a combination of people, processes, and technology, starting with the latter. Perhaps the most significant challenge is detecting unauthorized or inappropriate viewing of patient records, especially given the “wide span of entry points to gain access to these environments and to the data,” said Fasolo. And while most organizations would like to be able to continuously monitor access to each and every patient record, it simply isn’t realistic. What often ends up happening, according to Culbertson, is that security teams focus their energy on mitigating serious risks. The problem with that tradeoff, however, is that most incidents don’t happen out of the blue. “If you look at an individual’s behavior retrospectively, you see that they did some benign things and built on them,” he noted. “They test the system,” realizing that low-risk incidents are far less likely to be investigated. But that’s where the real threat lies, he said, noting that Protenus’ Protect Patient Privacy solution leverages artificial intelligence to audit “every access to every record, every day.”


How Your CTO And CFO Can Work Together On Tech Costs

CFOs and CTOs need to work together to forecast the TCO annually over the life of an application for budgeting to be more reflective of the true costs to run the enterprise application. This process involves identifying the potential cost takeouts as well, because if code can be 30 percent more efficient, this would further reduce the cost. The CFO is not the only one who needs this data; everyone from the developers to the management does. Our goals as technology professionals should be to understand the efficiency and costs of the features or code we are creating before they are promoted to production. This is the only way to truly control the costs of the application and enterprise cloud bills, which are often way over budget since this mindset is not currently built into operations. ... Just like a car, every server has an engine (capacity) and gas mileage (efficiency) and is run at a level of speed that will either tax the system or is sustainable. We check these regularly as a matter of course for our cars; why not for our technology?
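The claim above that 30 percent more efficient code would further reduce cost is easy to make concrete. The figures below are purely hypothetical, chosen only to show the shape of the TCO calculation a CTO and CFO would forecast together.

```python
import math

# Hypothetical baseline: what the application costs to run today.
servers = 40
cost_per_server_year = 6_000                     # assumed compute + licensing
baseline_tco = servers * cost_per_server_year    # annual TCO before optimization

# If the code becomes 30% more efficient, fewer servers carry the same load.
efficiency_gain = 0.30
servers_needed = math.ceil(servers * (1 - efficiency_gain))
optimized_tco = servers_needed * cost_per_server_year
savings = baseline_tco - optimized_tco
```

With these assumed numbers, the 40-server baseline of $240,000 per year drops to 28 servers and $168,000, a $72,000 annual saving — the kind of forecast the passage argues should be visible to developers before code is promoted to production.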


The Relationship Between Enterprise Tech Debt And Systemic Risk

We are currently witnessing a significant shift as boardrooms are being forced to address systemic risk. The recent changes announced by the SEC regarding cybersecurity expertise on boards are part of that shift, and boards and executive teams are being tasked with directly addressing systemic risk within their organizations. Systemic risk is one of the biggest challenges facing most organizations today, and tech debt is one of the primary drivers of systemic risk. And most executive teams don’t pay attention to either one. Yet. ... This is the significant challenge that tech debt brings to an organization. It hides under the cover of “working systems” in the background. The byproduct of tech debt is systemic risk. These aging platforms carry the risk of failing due to aging infrastructure and unreliable hardware and, even more importantly, they may be unable to support new workflows due to poor data structures and limited connectivity options for new data pipelines. So the systemic risk builds quietly, behind the scenes, while businesses function seemingly smoothly.



Quote for the day:

“Our expectation in ourselves must be higher than our expectation in others.” -- Victor Manuel Rivera

Daily Tech Digest - October 27, 2023

Quishing is the new phishing: What you need to know

Consider the QR code aired during the Super Bowl. Now, imagine the company behind that commercial had malicious intent (just to be clear, the company behind that commercial did not have malicious intent). Say, for example, the QR code displayed during the ad opened your phone's browser and automatically downloaded and installed a piece of ransomware. Given the number of people who watch the Super Bowl, the outcome of that attack could have been disastrous. That's quishing. ... We've all just accepted the QR code. And, to that end, we trust them. After all, how harmful can a simple QR code be? The answer to that question is…very. And cybercriminals are counting on the idea that most consumers always assume QR codes are harmless. Those same criminals also understand that their easiest targets are those on mobile phones. Why? Because most desktop operating systems include phishing protection. Phones, on the other hand, are far more vulnerable to those attacks. At the moment, most quishing attacks involve criminals sending a QR code via email. 
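On the defensive side, a QR scanner app (or an email gateway that decodes embedded QR images) can run simple heuristics on the decoded payload before following it. This is only a sketch with assumed policy choices — an HTTPS-only rule and a small blocklist of binary extensions — not a complete defense against quishing.

```python
from urllib.parse import urlparse

SAFE_SCHEMES = {"https"}                      # assumed policy: HTTPS only
BLOCKED_EXTENSIONS = (".apk", ".exe", ".msi") # assumed blocklist of binaries

def is_suspicious_qr_payload(payload):
    """Heuristic checks on a decoded QR payload before opening it."""
    url = urlparse(payload)
    if url.scheme not in SAFE_SCHEMES:
        return True   # http:, javascript:, file:, custom app schemes, etc.
    if url.path.lower().endswith(BLOCKED_EXTENSIONS):
        return True   # direct binary download, as in the ransomware scenario
    return False
```

A real deployment would add reputation lookups and user confirmation prompts, but even checks this simple would block the hypothetical Super Bowl scenario above, where the code points straight at a ransomware download.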


Boardrooms losing control in generative AI takeover

The theme of people adopting GenAI within their workplaces without oversight from IT and security teams or leadership, a trend we might reasonably term shadow AI, is not a new one as such. Earlier this year, an Imperva report drew similar concerns, stating that an insider breach at a large organisation arising from someone using generative AI in an off-the-books capacity was only a matter of time. However, given the steadily widening scope and ever-growing capability of generative AI tools, organisations can no longer afford not to exert, at the very least, minimal oversight. “Much like bring-your-own-device [BYOD], gen AI offers massive productivity benefits to businesses, but while our findings reveal that boardroom executives are clearly acknowledging its presence in their organisations, the extent of its use and purpose are shrouded in mystery,” said Kaspersky principal security researcher, David Emm. “Given that GenAI’s rapid evolution is currently showing no signs of abating, the longer these applications operate unchecked, the harder they will become to control and secure across major business functions such as HR, finance, marketing or even IT,” said Emm.


Privacy vs convenience – which comes out ahead?

There is a mutual responsibility between employees and employers, so trust and openness are essential. On the one hand, employees must be discerning about the digital tools they employ, understanding the permissions they grant and the third parties that might gain access to their data. They also need to accept that their personal choices can impact the security of the organization too. This requires awareness and a commitment to regular audits of personal digital spaces, ensuring that no unwanted entities are lurking in the shadows. Conversely, organizations bear the responsibility of being forthright about their data practices. Companies that are transparent about the data they access - and, more importantly, why they access it—stand out as beacons of integrity. This transparency extends beyond mere access; it encompasses the entire data lifecycle, from collection to storage, usage, and eventual disposal. By openly communicating these practices, enterprises can foster a culture of trust with their employees – and comply with regulatory standards too.


How to Speed Cyberattack Discovery

A fast and reliable way to identify cyber threats is with proactive threat hunting, which utilizes human defenders armed with advanced detection and proactive response technologies and approaches, says Mike Morris, a Deloitte risk and financial advisory managing director via an email interview. “In particular, threat hunting, during which human defenders actively maneuver through their networks and systems to identify indicators of a network attack and preemptively counter these threats, can speed the discovery of cyberattacks.” Yet he warns that for threat hunting to function optimally, it’s necessary that specific, relevant, and accurate intelligence is coupled with automation to identify and mitigate the adversary’s activities. When deploying human-based threat-hunting capabilities, it’s helpful to think about the parallels to physical security leading practices, Morris says. “For example, human security guards, tasked with protecting critical assets, constantly inspect physical infrastructures and maintain the integrity of their responsible spaces by actively patrolling and investigating,” he explains.


The Financial Consequences of Inadequate Data Governance

Low-quality data can severely impact decision-making and operational efficiency. Inaccurate or incomplete data can lead to flawed strategies, missed opportunities, and ultimately, financial losses. A sales team relying on outdated customer information could waste time on leads that have already been converted or are no longer relevant, leading to lost sales opportunities and financial losses. Similarly, a marketing team using incorrect customer segmentation data could end up targeting the wrong audience, wasting advertising budget, and missing revenue targets. These real-world scenarios further illustrate the cost of poor data quality. The examples highlight the tangible impact of data quality issues on an organization's bottom line. ... Data breaches can have devastating financial consequences for organizations. The direct costs include legal fees, fines, and customer compensation, which can run into millions of dollars. Indirect costs, such as reputational damage and loss of business, can be even more damaging in the long run. 


Change Management for Zero Trust

Change management isn’t any different for Zero Trust than it is for any other big initiative. But most of us aren’t very good at change management. And security and cybersecurity are not sexy. And most people want their security to be minimally invasive and as unnoticeable as possible. And most leaders get no top-line/bottom-line joy from spending money on Zero Trust initiatives. And Zero Trust doesn’t drop new features and functionality for a product at the end of a sprint. ... Three key areas you can focus on as you get started: Get leadership engaged – If the culture of your organization is driven by urgency, craft a message and plan that leverages urgency. If the culture is driven through aspiration, use aspirational vision and goals. Either way, get leadership on board to deliver the message. Create a communications strategy – The strategy must include the rhythm and mode of communications, as well as the context and content of the communications for leadership and sponsors, leads and key centralized players, local mavens, and users. Persuasive communication is what the marketing team does well. Get them involved.


UK Prime Minister announces world’s first AI Safety Institute

"The British people should have peace of mind that we're developing the most advanced protections for AI of any country in the world," Sunak said. "I will always be honest with you about the risks, and you can trust me to make the right long-term decisions." The AI Safety Institute will assess and study these risks — from social harms like bias and misinformation, through to the most extreme risks of all - so that the UK understands what each new AI model is capable of, Sunak added. "Right now, we don't have a shared understanding of the risks that we face. Without that, we cannot hope to work together to address them." The UK will therefore push hard to agree to the first ever international statement about the nature of AI risks to ensure that, as they evolve, so does shared understanding about them, Sunak said. "I will propose that we establish a truly global expert panel to publish a State of AI Science report. Of course, our efforts also depend on collaboration with the AI companies themselves. Uniquely in the world, those companies have already trusted the UK with privileged access to their models. That's why the UK is so well-placed to create the world's first Safety Institute."


DNS Under Siege: Real-World DNS Flood Attacks

During H1 2023 there was a surge in DNS flood attacks affecting multiple organizations around the world. Looking at the DNS attacks and their characteristics, we suspect that they belong to one or more global DNS attack campaigns. We identified significant DNS attack activity worldwide and found correlations between attack events against different DNS servers. ... The organizations targeted by these DNS flood attacks all belong to the financial and commercial segment. We detected and mitigated DNS flood attacks targeting banks, retailers, insurers, etc. This can hint at the DNS campaign’s agenda and help similar organizations protect their DNS services. ... There was a single attack vector repeating in all DNS attacks: the random subdomain, also known as DNS water torture. Interestingly, we identified different types of invalid subdomains used in the attacks. In some cases, it was purely random with high entropy, including case randomization, and in other cases the attack subdomains were human-readable, but non-existent.
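The high-entropy variant of the random-subdomain (water-torture) vector lends itself to a simple detection heuristic: score the character entropy of the label directly beneath the attacked zone. The threshold and minimum length below are assumptions, and note that this check would not catch the human-readable variant the authors also observed.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_water_torture(qname, apex, threshold=3.5):
    """Flag queries whose label under the apex has high character entropy.

    Threshold and minimum label length are assumed tuning values.
    """
    if not qname.endswith("." + apex):
        return False
    # Label immediately below the attacked zone, e.g. the random part
    # of <random>.example.com.
    label = qname[: -len(apex) - 1].split(".")[-1]
    return len(label) >= 8 and shannon_entropy(label) >= threshold
```

Legitimate labels like `www` or `login` are short and low-entropy, while a randomly generated label of 12 distinct characters carries roughly log2(12) ≈ 3.58 bits per character and trips the check.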


FTC eyes competitive practices for genAI

Generative AI is set to become one of the world’s most dominant industries. One projection puts the market at $76.8 billion by 2030, up from a current valuation at $11.3 billion. Goldman Sachs, for its part, boldly says the technology could drive a 7% (or nearly $7 trillion) increase in global GDP. Amidst all this, the FTC says issues could arise around control over one or more of the “key building blocks” of generative AI: data, talent and computational resources. If a single company or handful of firms controlled one of these essential inputs, “they may be able to leverage their control to dampen or distort competition,” the agency asserts. “And if generative AI itself becomes an increasingly critical tool, then those who control its essential inputs could wield outsized influence over a significant swath of economic activity.” In particular, the agency said firms could bundle and tie products — offering multiple products in a single package or conditioning the sale of one product on the purchase of another, respectively. 


Keys to effective cybersecurity threat monitoring

Over the past few years, attackers have adjusted their tactics, finding success in targeting employees with the intent of stealing their credentials. Social engineering tactics such as phishing often catch individual users out, leading to passwords, financial information and other sensitive data being breached. “In the past, they might have relied on attacking infrastructure directly through vulnerabilities or brute force attacks. While they can still happen, these attacks run a high risk of discovery before the bad actor can get in,” explained Hank Schless, director of global security campaigns at Lookout and host of the Security Soapbox Podcast. “Now, attackers are targeting individuals who likely have access to large sets of valuable cloud data. They do this with the intention of stealing those individuals’ credentials via mobile phishing attacks in order to be able to enter the organisation’s infrastructure discreetly under the guise of being a legitimate user. “This creates massive issues with monitoring for threats, because the threat looks like it’s coming from the inside if an attacker is using stolen credentials.”
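One way defenders partially recover the signal Schless describes — stolen credentials that look like a legitimate insider — is behavioral analytics on login telemetry, such as the classic "impossible travel" check. The speed threshold and the coordinates below are illustrative assumptions, not values from the article.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def impossible_travel(e1, e2, max_kmh=900.0):
    """Flag two logins for one account that imply faster-than-aircraft travel."""
    dist = haversine_km(e1["lat"], e1["lon"], e2["lat"], e2["lon"])
    hours = abs((e2["time"] - e1["time"]).total_seconds()) / 3600.0
    if hours == 0:
        return dist > 50   # "simultaneous" logins far apart
    return dist / hours > max_kmh

# Same account: London at 09:00 UTC, then New York at 10:00 UTC.
london = {"lat": 51.5074, "lon": -0.1278, "time": datetime(2023, 10, 27, 9, 0)}
new_york = {"lat": 40.7128, "lon": -74.0060, "time": datetime(2023, 10, 27, 10, 0)}
flagged = impossible_travel(london, new_york)
```

A session that jumps roughly 5,600 km in an hour implies a travel speed no commercial flight reaches, so it is flagged even though each login presented valid credentials.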



Quote for the day:

"It is the capacity to develop and improve their skills that distinguishes leaders from followers." -- Warren G. Bennis