Daily Tech Digest - August 31, 2024

CTO to CPTO: Navigating the Dual Role in Tech Leadership

A competent CPTO can streamline processes, reduce the risk of misalignment, and offer a clear vision for both product and technology initiatives. This approach can also be cost-effective, as executive roles come with high salaries and significant demands. Combining these roles simplifies the organizational structure, providing a single point of contact for research and development. This works well in environments where product and technology are closely integrated and the underlying systems are mature. In my role, most of my day-to-day activities are focused on the product. I’m very conscious that I don’t have a counterpart to challenge my thinking, so I spend a lot of time with senior business stakeholders to ensure the debates and discussions occur. I also encourage this in my leadership team to ensure that technology and product leaders are rigorous in their thinking and decision-making. Ultimately, deciding to have one or two roles for product and technology depends on a company’s specific needs, maturity, and strategic priorities. For some, clarity and focus come from having both a CPO and a CTO. For others, the simplicity and unified vision that come from a single leader make more sense.


How quantum computing could revolutionise (and disrupt) our digital world

Everything that is encrypted today could potentially be laid bare. Banking, commerce, and personal communications—all the pillars of our digital world—could be exposed, leading to consequences we’ve never encountered. Thankfully, Q-Day is estimated to be five to ten years away, mainly because building a stable quantum computer is fiendishly difficult. The processors need to be cooled to near absolute zero, among other technical challenges. But make no mistake—it’s coming. Sergio stressed that businesses and countries need to prepare now. Already, some groups are harvesting encrypted data with the intention of decrypting it when quantum computing capabilities mature. Much like the Y2K bug, Q-Day requires extensive preparation. This August, the National Institute of Standards and Technology (NIST) released the first set of post-quantum encryption standards designed to withstand quantum attacks. Similarly, the UK’s National Cyber Security Centre (NCSC) advises that migrating to post-quantum cryptography (PQC) is a complex, multi-year effort that requires immediate action.
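
To make the migration concrete, the sketch below (illustrative only, over a hypothetical cryptographic inventory) maps common quantum-vulnerable primitives to the replacements NIST finalized in August 2024: ML-KEM (FIPS 203) for key establishment and ML-DSA (FIPS 204) or SLH-DSA (FIPS 205) for signatures.

```python
# Illustrative sketch: flag classical algorithms in a (hypothetical) crypto
# inventory and suggest the NIST post-quantum replacements finalized in
# August 2024 (FIPS 203 ML-KEM, FIPS 204 ML-DSA, FIPS 205 SLH-DSA).

PQC_REPLACEMENTS = {
    "RSA-2048 (key transport)":  "ML-KEM (FIPS 203)",
    "ECDH P-256 (key exchange)": "ML-KEM (FIPS 203)",
    "RSA-2048 (signature)":      "ML-DSA (FIPS 204) or SLH-DSA (FIPS 205)",
    "ECDSA P-256 (signature)":   "ML-DSA (FIPS 204) or SLH-DSA (FIPS 205)",
}

def pqc_migration_report(inventory: list[str]) -> list[str]:
    """Return one advisory line per algorithm found in the inventory."""
    report = []
    for algo in inventory:
        if algo in PQC_REPLACEMENTS:
            report.append(f"{algo}: quantum-vulnerable; plan migration to {PQC_REPLACEMENTS[algo]}")
        else:
            report.append(f"{algo}: no mapping on file; review manually")
    return report

if __name__ == "__main__":
    # Hypothetical inventory pulled from a certificate or key-management audit.
    for line in pqc_migration_report(["ECDH P-256 (key exchange)", "AES-256-GCM"]):
        print(line)
```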


Transparency is often lacking in datasets used to train large language models

Researchers often use a technique called fine-tuning to improve the capabilities of a large language model that will be deployed for a specific task, like question-answering. For fine-tuning, they carefully build curated datasets designed to boost a model’s performance for this one task. The MIT researchers focused on these fine-tuning datasets, which are often developed by researchers, academic organizations, or companies and licensed for specific uses. When crowdsourced platforms aggregate such datasets into larger collections for practitioners to use for fine-tuning, some of that original license information is often left behind. “These licenses ought to matter, and they should be enforceable,” Mahari says. For instance, if the licensing terms of a dataset are wrong or missing, someone could spend a great deal of money and time developing a model they might be forced to take down later because some training data contained private information. “People can end up training models where they don’t even understand the capabilities, concerns, or risk of those models, which ultimately stem from the data,” Longpre adds.
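
As a rough illustration of the provenance checks the researchers call for, the sketch below scans an aggregated collection for datasets whose license metadata is missing or ambiguous before they are used for fine-tuning; the field names and example entries are hypothetical.

```python
# Minimal sketch of a pre-fine-tuning license audit: flag datasets in an
# aggregated collection whose license metadata is missing or was dropped
# during repackaging. Field names ("name", "license", "source_url") and the
# sample entries are hypothetical.

def audit_licenses(collection: list[dict]) -> list[str]:
    """Return warnings for datasets that lack usable license information."""
    warnings = []
    for ds in collection:
        license_info = (ds.get("license") or "").strip().lower()
        if license_info in ("", "unknown", "other"):
            warnings.append(f"{ds.get('name', '<unnamed>')}: license missing or ambiguous "
                            f"(source: {ds.get('source_url', 'n/a')})")
    return warnings

collection = [
    {"name": "qa-pairs-v2", "license": "CC-BY-4.0", "source_url": "https://example.org/qa"},
    {"name": "scraped-forums", "license": "", "source_url": "https://example.org/forums"},
]
for warning in audit_licenses(collection):
    print(warning)  # surfaces datasets to exclude or re-verify before training
```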


Cyber Insurance: A Few Security Technologies, a Big Difference in Premiums

Finding the right security technologies for the business is increasingly important, because ransomware incidents have accelerated over the past few years, says Jason Rebholz, CISO at Corvus Insurance, a cyber insurer. Attackers posted the names of at least 1,248 victims to leak sites in the second quarter of 2024, the highest quarterly volume to date, according to the firm. ... "We take VPNs very seriously in how we price [our policies] and what recommendations we give to our companies ... and this is mostly related to ransomware," Itskovich says. For those reasons, businesses should take a look at their VPN security and email security, if they want to better secure their environments and, by extension, reduce their policy costs. Because an attacker will eventually find a way to compromise most companies, having a way to detect and respond to threats is vitally important, making managed detection and response (MDR) another technology that will eventually pay for itself, he says. ... For smaller companies, email security, cybersecurity-awareness training, and multi-factor authentication are critical, says Matthieu Chan Tsin, vice president of cybersecurity services for Cowbell.


Cybersecurity for Lawyers: Open-Source Software Supply Chain Attacks

A supply chain attack co-opts the trust in the open-source development model to place malicious code inside the victim’s network or computer systems. Essentially, the attacker inserts malicious code, like a foodborne virus, into the software during its development process, positioning it to be installed unintentionally when the end user deploys the software within their network. Any organization using the affected project has unwittingly invited the malicious code within its walls. Malicious code may already reside within a newly adopted OSS project, or it could be delivered via an updated version of a trusted project. The difference between an OSS supply chain attack and a traditional supply chain attack (e.g., inserting malware into proprietary software) is that the organization using OSS has access to its entire code at the outset and throughout its use (and can therefore examine it for vulnerabilities or otherwise have greater insight into how it functions when used maliciously). While some organizations may have the resources and wherewithal to leverage this as a security advantage, many will not.


A Measure of Motive: How Attackers Weaponize Digital Analytics Tools

IP geolocation utilities can be used legitimately by advertisers and marketers to gauge the geo-dispersed impact of advertising reach and the effectiveness of marketing funnels (albeit with varying levels of granularity and data availability). However, Mandiant has observed IP geolocation utilities used by attackers. Some real-world attack patterns that Mandiant has observed leveraging IP geolocation utilities include: malware payloads connecting to geolocation services for infection tracking purposes upon successful host compromise, such as with the Kraken Ransomware, which allows attackers a window into how fast and how far their campaign is spreading; and malware conditionally performing malicious actions based on IP geolocation data. This functionality allows attackers a level of control around their window of vulnerability and ensures they do not engage in “friendly fire” if their motivations are geo-political in nature, such as indiscriminate nation-state targeting by hacktivists. An example of this technique can be seen in the case of the TURKEYDROP variant of the Adwind malware, which attempts to surgically target systems located in Turkey.
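
For defenders, one practical response is to watch for the beaconing pattern described above. The sketch below is a simple, illustrative filter over hypothetical proxy logs; the list of geolocation-service domains is an example set, not an authoritative one, and this is not Mandiant tooling.

```python
# Defensive sketch: scan proxy-log records for outbound requests to common
# IP-geolocation services, which the article notes malware often calls right
# after compromise. The domain list and log format are illustrative examples.

GEOIP_SERVICE_DOMAINS = {"ip-api.com", "ipinfo.io", "ifconfig.me", "api.ipify.org"}

def flag_geoip_beacons(proxy_log: list[dict]) -> list[dict]:
    """Return log entries whose destination matches a known geolocation service."""
    return [entry for entry in proxy_log
            if entry.get("dest_host", "").lower() in GEOIP_SERVICE_DOMAINS]

sample_log = [
    {"src_host": "hr-laptop-17", "dest_host": "ipinfo.io", "ts": "2024-08-30T10:02:11Z"},
    {"src_host": "build-server", "dest_host": "github.com", "ts": "2024-08-30T10:02:15Z"},
]
for hit in flag_geoip_beacons(sample_log):
    print(f"Review {hit['src_host']}: geolocation lookup to {hit['dest_host']} at {hit['ts']}")
```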


AI development and agile don't mix well

Interestingly, several AI specialists see formal agile software development practices as a roadblock to successful AI. ... "While the agile software movement never intended to develop rigid processes -- one of its primary tenets is that individuals and interactions are much more important than processes and tools -- many organizations require their engineering teams to universally follow the same agile processes." ... The report suggested: "Stakeholders don't like it when you say, 'it's taking longer than expected; I'll get back to you in two weeks.' They are curious. Open communication builds trust between the business stakeholders and the technical team and increases the likelihood that the project will ultimately be successful." Therefore, AI developers must ensure technical staff understand the project purpose and domain context: "Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure. Ensuring effective interactions between the technologists and the business experts can be the difference between success and failure for an AI project."


A quantum neural network can see optical illusions like humans do. Could it be the future of AI?

When we see an optical illusion with two possible interpretations (like the ambiguous cube or the vase and faces), researchers believe we temporarily hold both interpretations at the same time, until our brains decide which picture should be seen. This situation resembles the quantum-mechanical thought experiment of Schrödinger’s cat. This famous scenario describes a cat in a box whose life depends on the decay of a quantum particle. According to quantum mechanics, the particle can be in two different states at the same time until we observe it – and so the cat can likewise simultaneously be alive and dead. I trained my quantum-tunnelling neural network to recognise the Necker cube and Rubin’s vase illusions. When faced with the illusion as an input, it produced an output of one or the other of the two interpretations. Over time, which interpretation it chose oscillated back and forth. Traditional neural networks also produce this behaviour, but in addition my network produced some ambiguous results hovering between the two certain outputs – much like our own brains can hold both interpretations together before settling on one.


How To Channel Anger As An Emotional Intelligence Strategy

If you want to use anger in a constructive way, you first have to break the mental stigma that “Anger is bad.” Anger, like all emotions, is an instinctual response. Rather than label this response as good or bad, it’s more useful to think of it simply as data. Your emotions offer you data, and you can harness that data in a number of ways. ... The second half of the battle is to learn to use your anger with intent. To do so, you have to understand the potential for anger to hijack your behavior. “[Anger] can also be a negative,” Scherzer warned in the same interview. “It has been [for me] in the past, where you almost get too much adrenaline, too much emotion, and you aren’t thinking clearly.” In other words, Scherzer doesn’t just dial in anger and then see what happens. He channels it with purpose. Even though he may appear intense or even hotheaded, his intent is strong. And that intent is what enables him to harness his anger in a constructive way. ... Since this is a more advanced emotional intelligence strategy, there are a couple of things you should keep top of mind. First, if you’re the kind of person whose anger frequently gets in your way, you should likely focus your time on management strategies, not this one. Second, you should start by applying this strategy in a lower-stakes situation.


How to Improve Your Leadership Style With Cohort-Based Leadership Training

Cohort-based learning is rooted in Albert Bandura's social learning theory. Social interaction improves learning because humans are social creatures by nature. Hence, we enjoy learning more from interactive, multimedia methods than passive ones that lack feedback or immediate results. Perspective-taking and mentalizing in cohorts promote empathy and communication skills, while emotional resonance and dialogue deepen understanding for all involved. The accountability that forms in groups encourages commitment and performance. Community-based learning, feedback, emotional support and real-world application ignite individual and collective learning. ... The structured curriculum is designed to cover various aspects of leadership, building upon previous sessions to provide a comprehensive learning journey. Practical tools, measurements and models are provided to apply directly to the work environment. Real-time feedback and consulting during group sessions help participants tackle specific workplace challenges, allowing for continuous learning, application and feedback to support their development.



Quote for the day:

“A bend in the road is not the end of the road unless you fail to make the turn.” -- Helen Keller

Daily Tech Digest - August 30, 2024

Balancing AI Innovation and Tech Debt in the Cloud

While AI presents incredible opportunities for innovation, it also sheds light on the need to reevaluate existing governance awareness and frameworks to include AI-driven development. Historically, DORA metrics were introduced to quantify elite engineering organizations based on two critical categories: speed and safety. Speed alone does not indicate elite engineering if the safety aspects are disregarded altogether. AI development cannot be left behind when considering the safety of AI-driven applications. Running AI applications according to data privacy, governance, FinOps and policy standards is critical now more than ever, before this tech debt spirals out of control and data privacy is infringed upon by machines that are no longer in human control. Data is not the only thing at stake, of course. Costs and breakage should also be a consideration. If the CrowdStrike outage from last month has taught us anything, it’s that even seemingly simple code changes can bring down entire mission-critical systems at a global scale when not properly released and governed. Preventing such outcomes involves enforcing rigorous data policies, cost-conscious policies, compliance checks and comprehensive tagging of AI-related resources.


AI and Evolving Legislation in the US and Abroad

The best way to prepare for regulatory changes is to get your house in order. Most crucial is having an AI and data governance structure. This should be part of the overall product development lifecycle so that you’re thinking about how data and AI is being used from the very beginning. Some best practices for governance include: Forming a cross-functional committee to evaluate the strategic use of data and AI products; Ensuring you have experts from different domains working together to design algorithms that produce output that is relevant, useful and compliant; Implementing a risk assessment program to determine what risks are at issue for each use case; Executing an internal and external communication plan to inform about how AI is being used in your company and the safeguards you have in place. AI has become a significant, competitive factor in product development. As businesses develop their AI program, they should continue to abide by responsible and ethical guidelines to help them stay compliant with current and emerging legislation. Companies that follow best practices for responsible use of AI will be well-positioned to navigate current rules and adapt as regulations evolve.


The paradox of chaos engineering

Although chaos engineering offers potential insights into system robustness, enterprises must scrutinize its demands on resources, the risks it introduces, and its alignment with broader strategic goals. Understanding these factors is crucial to deciding whether chaos engineering should be a focal area or a supportive tool within an enterprise’s technological strategy. Each enterprise must determine how closely to follow this technological evolution and how long to wait for their technology provider to offer solutions. ... Chaos engineering offers a proactive defense mechanism against system vulnerabilities, but enterprises must weigh its risks against their strategic goals. Investing heavily in chaos engineering might be justified for some, particularly in sectors where uptime and reliability are crucial. However, others might be better served by focusing on improvements in cybersecurity standards, infrastructure updates, and talent acquisition. Also, what will the cloud providers offer? Many enterprises get into public clouds because they want to shift some of the work to the providers, including reliability engineering. Sometimes, the shared responsibility model is too focused on the desire of the cloud providers rather than their tenants. You may need to step it up, cloud providers.


Generative AI vs large language models: What’s the difference?

While generative AI has become popular for content generation more broadly, LLMs are making a massive impact on the development of chatbots. This allows companies to provide more useful responses to real-time customer queries. However, there are differences in the approach. A basic generative AI chatbot, for example, would answer a question with a set answer taken from a stock of responses upon which it has been trained. Introducing an LLM as part of the chatbot set-up means its response will become much more detailed and responsive, as if the reply had come from a human advisor rather than from a computer. This is quickly becoming a popular option, with firms such as JP Morgan embracing LLM chatbots to improve internal productivity. Other useful implementations of LLMs are to generate or debug code in software development or to carry out brainstorms or research tasks by tapping into various online sources for suggestions. This ability is made possible by another related AI technology called retrieval augmented generation (RAG), in which LLMs draw on vectorized information outside of their training data to root responses in additional context and improve their accuracy.
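
The sketch below outlines the RAG flow described above: embed the question, retrieve the closest stored documents, and ground the LLM's prompt in them. The `embed` and `call_llm` callables are stand-ins for whatever embedding model and LLM API a team actually uses; only the orchestration logic is shown.

```python
# Sketch of a retrieval-augmented generation (RAG) flow. `embed()` and
# `call_llm()` are caller-supplied stand-ins, so no specific vendor API is assumed.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, store, k=2):
    """Return the k documents whose vectors are closest to the query vector."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

def answer_with_rag(question, embed, call_llm, store):
    context = "\n".join(retrieve(embed(question), store))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)  # the LLM grounds its reply in the retrieved context
```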


Agentic AI: Decisive, operational AI arrives in business

Agentic AI, at its core, is designed to automate a specific function within an organization’s myriad business processes, without human intervention. AI agents can, for example, handle customer service issues, such as offering a refund or replacement, autonomously, and they can identify potential threats on an organization’s network and proactively take preventive measures. ... Cognitive AI agents can also serve as assistants in the healthcare setting by engaging with a patient daily to support mental healthcare treatment, and as student recruiters at universities, says Michelle Zhou, founder of Juji AI agents and an inventor of IBM Watson Personality Insights. The AI recruiter could ask prospective students about their purpose of visit, address their top concerns, infer the students’ academic interests and strengths, and advise them on suitable programs that match their interests, she says. ... The key to getting the most value out of AI agents is getting out of the way, says Jacob Kalvo, co-founder and CEO of Live Proxies, a provider of advanced proxy solutions. “Where agentic AI truly unleashes its power is in the ability to act independently,” he says. 


Protecting E-Commerce Businesses Against Disruptive AI-driven Bot Threats

Bot attacks have long been a thorn in the side of e-commerce platforms. With the growing number of shoppers regularly interacting and sharing their data on retail websites combined with high transaction volumes and a growing attack surface, these online businesses have been a lucrative target for cybercriminal activity. From inventory hoarding, account takeover, and credential stuffing to price scraping and fake account creation, these automated threats have often caused significant damage to e-commerce operations. By using a variety of sophisticated evasion techniques in distributed bot attacks such as rapidly rotating IPs and identities and manipulating HTTP headers to appear as legitimate requests, attackers have been able to evade detection by traditional bot detection tools.  ... With the evolution of Generative AI models and their increasing adoption by bot operators, bot attacks are expected to become even more sophisticated and aggressive in nature. In the future, Gen AI-based bots could be able to independently learn, communicate with other bots, and adapt in real-time to an application’s defensive mechanisms.
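
To make the evasion techniques concrete, here is a simplified sketch of the kind of per-session heuristic that traditional bot defenses apply, and that the article suggests AI-driven bots increasingly evade; the signals, weights, and thresholds are illustrative assumptions, not a production detector.

```python
# Simplified bot-scoring heuristic: score a session on how many distinct IPs it
# rotates through, whether common browser headers are missing, and request rate.
# Thresholds and weights are illustrative only.

def bot_score(session: dict) -> int:
    score = 0
    if len(set(session.get("ips", []))) > 5:            # rapid IP rotation
        score += 2
    headers = {h.lower() for h in session.get("headers", [])}
    for expected in ("user-agent", "accept-language", "referer"):
        if expected not in headers:                      # headers real browsers usually send
            score += 1
    if session.get("requests_per_minute", 0) > 120:      # inhuman request rate
        score += 2
    return score

session = {"ips": ["203.0.113.1", "203.0.113.7", "198.51.100.2", "198.51.100.9",
                   "192.0.2.4", "192.0.2.8"],
           "headers": ["User-Agent"], "requests_per_minute": 300}
print(bot_score(session))  # higher scores would be challenged or blocked
```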


How copilot is revolutionising business process automation and efficiency

Copilot is essential for optimising operations in addition to increasing productivity. Companies frequently struggle with inefficiencies brought on by human error and manual processes. Copilot ensures seamless operations and lowers the possibility of errors by automating these activities. Customer service automation is a case in point. According to a survey, 72% of consumers believe that agents should automatically be aware of their personal information and service history. Customer relationship management (CRM) systems can incorporate Copilot to give agents real-time information and recommendations, guaranteeing a customised and effective service experience. The efficiency of customer support operations is further enhanced by intelligent routing of questions and automated responses. ... For example, Copilot can forecast performance, assess market trends, and provide investment recommendations in the financial industry. Deloitte claims that artificial intelligence (AI) can reduce operating costs in the finance sector by as much as 20%. Copilot’s automated data analysis and accurate recommendation engine help financial organisations remain ahead of the curve and confidently make strategic decisions.


Is your data center earthquake-proof?

Leuce explains that when Colt DCS designs the layout of a data center, it ensures the most critical parts, such as the data halls, electrical rooms, and other ancillary rooms required for business continuity, are placed on the isolation base. Other elements, such as generators, which are often designed to withstand an earthquake, can then be placed directly on the ground. ... A final technique employed by Colt DCS is the use of dampers – hydraulic devices that dissipate the kinetic energy of seismic events and cushion the impact between structures. Having previously deployed lead dampers at its first data center in Inzai, Japan, Colt has gone a step further at its most recently built facility in Keihanna, Japan, where it is using a combination of an oil damper made out of naturally laminated rubber plus a friction pendulum system, a type of base isolation that allows you to damp both vertically and horizontally. “The reason why we mix the friction pendulum with the oil damper is because with the oil damper, you can actually control the frequency in the harmonics pulsation of the building, depending on the viscosity of the oil, while the friction pendulum does the job of dampening the energy in both directions, so you bring both technologies together,” Leuce explains.


Digital IDV standards, updated regulation needed to fight sophisticated cybercrime

In the face of rising fraud and technological advancements, there is a growing consensus on the need for innovative approaches to financial security. As argued in a recent Forbes article, the upcoming election season presents an opportunity to rethink the ecosystem that supports financial innovation. In the article, Penny Lee, president and CEO of the Financial Technology Association (FTA), advocates for policies that foster technological advancements while ensuring robust regulatory frameworks to protect consumers from emerging threats. ... Amidst these challenges, the payments industry is experiencing a surge in innovation aimed at combating fraud and enhancing security. Real-time payments and secure digital identity systems are at the forefront of these efforts. The U.S. Payments Forum Summer Market Snapshot highlights a growing interest in real-time payments systems, which enable instant transfer of funds and provide businesses and consumers with immediate access to their money. These systems are designed to improve cash flow management and reduce the risk of fraud through enhanced authentication measures.


Transformer AI Is The Healthcare System's Way Forward

Transformer-based LLMs are adapting quickly to the amount of medical information the NHS deals with per patient and on a daily basis. The size of the ‘context windows’, or input, is expanding to accommodate larger patient files, critical for quick analysis of medical notes and more efficient decision making by clinical teams. Beyond speed, these models serve well for quality of output, which can lead to more optimal patient care. An ‘attention mechanism’ learns how different inputs relate to each other. In a medical context, this can include the interactions of different drugs in a patient’s record. It can find relationships between medicines and certain allergies, predicting the outcome of this interaction on the patient’s health. As more patient records become electronic, the larger training sets will allow LLMs to become more accurate. These AI models can do what takes humans hours of manual effort – sifting through patient notes, interpreting medical records and family history and understanding relationships between previous conditions and treatments. The benefit of having this system in place is that it creates a full, contextual picture of a patient that helps clinical teams make quick decisions about treatment and counsel.
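
A minimal sketch of the attention computation the article refers to is shown below, using toy NumPy arrays rather than real clinical data; production models stack many such heads over learned token embeddings of the actual notes.

```python
# Minimal NumPy sketch of scaled dot-product attention: each position's output
# is a weighted mix of all values, with weights reflecting how strongly inputs
# relate to one another. Toy sizes only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise relevance of inputs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights                     # blended values + attention map

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))   # 4 "tokens" (e.g. note fragments), dimension 8
output, attn = scaled_dot_product_attention(Q, K, V)
print(attn.round(2))  # row i shows how much token i attends to every token
```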



Quote for the day:

"Are you desperate or determined? With desperation comes frustration; With determination comes purpose achievement, and peace." -- James A. Murphy

Daily Tech Digest - August 29, 2024

The human factor in the industrial metaverse

The virtualisation of factories might ensure additional efficiencies, but it has the potential to fundamentally alter the human dynamics within an organisation. With rising reliance on digital tools, it becomes challenging to maintain the human aspects of work. ... Just as evolving innovation is crucial, so is organisational culture. Leaders must promote a culture that supports agility, innovation, and continuous learning to ensure success in a virtual factory environment. This can be achieved by being transparent, encouraging experimentation, and recognising and rewarding an employee’s creativity and adaptability. With the rapid evolution of virtual factories, employees must undergo comprehensive training that covers both technical and soft skills to adapt to the virtual environment. While practical, hands-on exercises are crucial for real-world application, it’s also important to have continuous learning with ongoing workshops, online training, and cross-training opportunities. To further enhance knowledge sharing, establishing mentorship and peer-learning programs can ensure a smooth transition, fostering a cohesive and productive workforce.


Challenging The Myths of Generative AI

The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated. The importance and value of thinking about our work and why we do it is waved away as a distraction. The goal of writing, this myth suggests, is filling a page rather than the process of thought that a completed page represents. ... The prompt myth is a technical myth at the heart of the LLM boom. It was a simple but brilliant design stroke: rather than a window where people paste text and allow the LLM to extend it, ChatGPT framed it as a chat window. We’re used to chat boxes, a window that waits for our messages and gets a (previously human) response in return. In truth, users provide words that dictate what they get back. ... Intelligence myths arise from the reliance on metaphors of thinking in building automated systems. These metaphors – learning, understanding, and dreaming – are helpful shorthand. But intelligence myths rely on hazy connections to human psychology. They often mistake AI systems inspired by models of human thought for a capacity to think.


The New Frontiers of Cyber-Warfare: Insights From Black Hat 2024

Corporate sanctions against nations are just one aspect of the broader issue. Moss also spoke about a new kind of trade war, where nation-states are pushing back against big tech companies and their political and economic agendas – along with the agendas of countries where these companies are based. Moss noted that countries are now using digital protectionist policies to wage what he called "a new way to escalate." He cited India's 2020 ban on TikTok, which resulted in China’s ByteDance reportedly facing up to $6 billion in losses. Moss also discussed the phenomenon of “app diplomacy,” where governments dictate to big tech companies like Apple and Google which apps are permitted in their markets. He mentioned the practice of “tech sorting,” where countries try to maintain strict control over foreign tech through redirection, throttling, or direct censorship. ... Shifting from concerns over AI to the emerging weapons of cyber espionage and warfare, Moss, moderating Black Hat’s wrap-up discussion, brought up the growing threat of hardware attacks. He asked Jos Wetzels, partner at Midnight Blue, to discuss the increasing accessibility of electromagnetic (EM) and laser weapons.


5 best practices for running a successful threat-informed defense in cybersecurity

Assuming organizations are doing vulnerability scanning across systems, applications, attack surfaces, cloud infrastructure, etc., they will come up with lists of tens of thousands of vulnerabilities. Even big, well-resourced enterprises can’t remediate this volume of vulnerabilities in a timely fashion, so leading firms depend upon threat intelligence to guide them into fixing those vulnerabilities most likely to be exploited presently or in the near future. ... As previously mentioned, a threat-informed defense involves understanding adversary TTPs, comparing these TTPs to existing defenses, identifying gaps, and then implementing compensating controls. These last steps equate to reviewing existing detection rules, writing new ones, and then testing them all to make sure they detect what they are supposed to. Rather than depending on security tool vendors to develop the right detection rules, leading organizations invest in detection engineering across multiple toolsets such as XDR, email/web security tools, SIEM, cloud security tools, etc. CISOs I spoke with admit that this can be difficult and expensive to implement. 
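
As a rough sketch of threat-informed prioritization, the example below ranks scanner findings by intelligence signals (a known-exploited flag and an exploit-likelihood score such as EPSS) before falling back to CVSS; the field names, feed structure, and CVE entries are assumptions for illustration only.

```python
# Sketch of threat-informed triage: rank scanner findings by threat-intel
# signals rather than by raw CVSS alone. Field names and sample data are
# illustrative, not tied to any particular vendor feed.

def prioritize(findings: list[dict], intel: dict) -> list[dict]:
    def key(f):
        cve_intel = intel.get(f["cve"], {})
        return (
            cve_intel.get("known_exploited", False),    # actively exploited first
            cve_intel.get("exploit_probability", 0.0),  # then likelihood of exploitation
            f.get("cvss", 0.0),                         # CVSS only as a tiebreaker
        )
    return sorted(findings, key=key, reverse=True)

findings = [{"cve": "CVE-2024-0001", "cvss": 9.8}, {"cve": "CVE-2024-0002", "cvss": 7.5}]
intel = {"CVE-2024-0002": {"known_exploited": True, "exploit_probability": 0.92}}
for f in prioritize(findings, intel):
    print(f["cve"])  # the exploited CVE outranks the higher-CVSS but unexploited one
```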


Let’s Bring H-A-R-M-O-N-Y Back Into Our Tech Tools

The focus of a platform approach is on harmonized experiences: a state of balance, agreement and even pleasant interaction among the various elements and stakeholders involved in development. There needs to be a way to make it easy and enjoyable to build, test and release at the pace of today’s business without the annoying dependencies that bog down developers along the way — on both the application and infrastructure sides. I believe tool stacks and platforms that use a harmony-focused method can even bring the fun back into development. ... Resilience refers to the ability to withstand and recover from failures and disruptions, and you can’t follow a harmonized approach without it. A resilient architecture is designed to handle unexpected challenges — be they spikes in traffic, hardware malfunctions or software bugs — without compromising core functionality. How do you create resiliency? Through running, testing and debugging your code to catch errors early and often. Building a robust testing foundation can look like having a dedicated testing environment and ephemeral testing features. 


Cybersecurity Maturity: A Must-Have on the CISO’s Agenda

The process of maturation in personnel is often reflected in the way these teams are measured. Less mature teams tend to be measured on activity metrics and KPIs around how many tickets are handled and closed, for example. In more mature organisations the focus has shifted towards metrics like team satisfaction and staff retention. This has come through strongly in our research. Last year 61% of cybersecurity professionals surveyed said that the key metric they used to assess the ROI of cybersecurity automation was how well they were managing the team in terms of employee satisfaction and retention – another indication that it is reaching a more mature adoption stage. Organizations with mature cybersecurity approaches understand that tools and processes need to be guided through the maturity path, but that the reason for doing so is to serve the people working with them. The maturity and skillsets of teams should also be reviewed, and members should be given the opportunity to add their own input. What is their experience of the tools and processes in place? Do they trust the outcomes they are getting from AI- and machine learning-powered tools and processes? 


What can my organisation do about DDoS threats?

"Businesses can prevent attacks using managed DDoS protection services or through implementing robust firewalls to filter malicious traffic and deploying load balancers to distribute traffic evenly when under heavy load,” advises James Taylor, associate director, offensive security practice, at S-RM. “Other defences include rate limiting, network segmentation, anomaly detection systems and implementing responsive incident management plans.” But while firewalls and load balancers may stop some of the more basic DDoS attack types, such as SYN floods or fragmented packet attacks, they are unlikely to handle more sophisticated DDoS attacks which mimic legitimate traffic, warns Donny Chong, product and marketing director at DDoS specialist Nexusguard. “Businesses should adopt a more comprehensive approach to DDoS mitigation such as managed services,” he says. “In this setup, the most effective approach is a hybrid one, combining cloud-based mitigation with on-premises hardware which be managed externally by the DDoS specialist provider. It also combines robust DDoS mitigation with the ability to offload traffic to the designated cloud provider as and when needed.”


How Aspiring Software Developers Can Stand Out in a Tight Job Market: 5 FAQs

While technical skills are critical, the ability to listen to clients, understand their problems and translate technical information into simple language is also important. Without reliable soft skills, clients may doubt your ability to address their needs. Employers also want candidates who can collaborate and work effectively in a team setting. This involves taking initiative, having strong written and verbal communication skills and being proactive about sharing status updates. Demonstrate these skills by discussing how you applied them in college extracurriculars or in the classroom as part of group project work, and how you plan to apply them in the workplace. In a highly competitive job market, doing so may set you apart from other candidates who offer similar technical backgrounds. ... Research the company before applying for a role so you're prepared with thoughtful questions for your interview. For example, you might want to ask about the new hire onboarding process, professional development opportunities, company culture or specific questions regarding a project the interviewer has recently worked on.


Bridging the AI Gap: The Crucial Role of Vectors in Advancing Artificial Intelligence

Vector databases have recently emerged into the spotlight as the go-to method for capturing the semantic essence of various entities, including text, images, audio, and video content. Encoding this diverse range of data types into a uniform mathematical representation means that we can now quantify semantic similarities by calculating the mathematical distance between these representations. This breakthrough enables “fuzzy” semantic similarity searches across a wide array of content types. While vector databases aren’t new and won’t resolve all current data challenges, their ability to perform these semantic searches across vast datasets and feed that information to LLMs unlocks previously unattainable functionality. ... We are in the early stages of leveraging vectors, both in the emerging generative AI space and the classical ML domain. It’s important to recognise that vectors don’t come as an out-of-the-box solution and can’t simply be bolted onto existing AI or ML programs. However, as they become more prevalent and universally adopted, we can expect the development of software layers that will make it easier for less technical teams to apply vector technology effectively.
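
The "mathematical distance" idea reduces to something like the toy example below, where cosine similarity ranks made-up embedding vectors; in practice the vectors come from an embedding model and are stored and searched in a vector database.

```python
# Toy illustration: once items are encoded as vectors, semantic similarity
# becomes a similarity ranking. The three-dimensional vectors are made up;
# real embeddings come from a trained model.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

corpus = {
    "invoice for cloud services": np.array([0.9, 0.1, 0.0]),
    "bill for AWS usage":         np.array([0.8, 0.2, 0.1]),
    "holiday photo of a beach":   np.array([0.0, 0.1, 0.9]),
}
query = np.array([0.85, 0.15, 0.05])   # pretend embedding of "cloud hosting charges"
best = max(corpus, key=lambda text: cosine_similarity(query, corpus[text]))
print(best)  # the semantically closest item, not an exact keyword match
```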


AI Can Reshape Insight Delivery and Decision-making

Moving on to risk, Tubbs shares that AI plays a pivotal role in the organizational risk mitigation strategy. With AI, the organization can identify potential risks and propose countermeasures that can significantly contribute to business stability. Therefore, Visa can be proactive in fighting fraud and risks, specifically in the payment landscape. Another usage of AI at Visa is in making real-time decisions with real-time analytics. Given the billions of transactions a month, real-time analytics enable the organization to comprehend what the transactions mean and how to make prompt decisions around anomalous behavior. AI also fosters collaboration in the ecosystem and organization by encouraging different teams to work towards a shared objective. Summing up, she refers to the cost-saving aspect of AI and maintains that Visa is driven to automate processes that have taken a significant amount of time historically. Shifting to the other side of good AI, Tubbs affirms that AI can also be used by fraudsters for nefarious reasons. To avoid that, Visa constantly evaluates its models and algorithms. She notes that Visa has a dedicated team to look into the dark web to understand the actions of fraudsters.



Quote for the day:

"Successful and unsuccessful people do not vary greatly in their abilities. They vary in their desires to reach their potential." -- John Maxwell

Daily Tech Digest - August 28, 2024

Improving healthcare fraud prevention and patient trust with digital ID

Digital trust involves the use of secure and transparent technologies to protect patient data while enhancing communication and engagement. For example, digital consent forms and secure messaging platforms allow patients to communicate with their healthcare providers conveniently while ensuring that their data remains protected. Furthermore, integrating digital trust technology into healthcare systems can streamline administrative processes, reduce paperwork, and minimize the chances of errors, according to a blog post by Five Faces. This not only enhances operational efficiency but also improves the overall patient experience by reducing wait times and simplifying access to medical services. ... These smart cards, embedded with secure microchips, store vital patient information and health insurance details, enabling healthcare providers to access accurate and up-to-date information during consultations. The use of chip-based ID cards reduces the risk of identity theft and fraud, as these cards are difficult to duplicate and require secure authentication methods. This technology ensures that only authorized individuals can access patient information, thereby protecting sensitive data from unauthorized access.


A CEO's Take on AI in the Workforce

Those ignoring the AI transformation and not uptraining their skilled staff are not putting themselves in a position to make use of untapped data that can provide insights into other areas of opportunity for their business. Making minimal-to-no investments in emerging technology merely delays the inevitable and puts companies at a disadvantage at the hands of their competitors. Alternatively, being too aggressive with AI can lead to security vulnerabilities or critical talent loss. While AI integration is critical to accelerating business outputs, doing so without moderators, data safeguards, and regulators to keep organizations in line with data governance and compliance is actually exposing companies to security issues. ... AI should not replace people, but rather presents an opportunity to better utilize them. AI can help solve time-management and efficiency issues across organizations, allowing skilled people to focus on creative and strategic roles or projects that drive better business value. The role of AI should focus on automating time-consuming, repetitive, administrative tasks, thereby leaving individuals to be more calculated and intentional with their time.


The promise of open banking: How data sharing is changing financial services

The benefits of open banking are multifaceted. Customers gain greater control over their financial data, allowing them to securely share it with authorized providers. This empowers them to explore a wider range of customized financial products and services, ultimately promoting financial stability and well-being. Additionally, open banking fosters innovation within the industry, as Fintech companies leverage customer-consented data to develop cutting-edge solutions. The Account Aggregator (AA) framework, regulated by the Reserve Bank of India (RBI), is a cornerstone of open banking in India. AAs act as trusted intermediaries, allowing users to consolidate their financial data from various sources, including banks, mutual funds, and insurance companies, into a single platform. ... APIs empower platforms to aggregate FD offerings from a multitude of banks across India. This provides investors with a comprehensive view of available options, allowing them to compare interest rates, tenures, minimum deposit requirements, and other features within a single platform. This transparency empowers informed decision-making, enabling investors to select the FD that best aligns with their risk appetite and financial goals.
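
Once offers have been pulled over banks' APIs, the aggregation-and-comparison step looks roughly like the sketch below; the offers, field names, and filter values are hypothetical.

```python
# Illustrative sketch of what an aggregator platform does after fetching
# fixed-deposit (FD) offers over bank APIs: normalise them and rank by the
# investor's constraints. All data below is hypothetical.

offers = [
    {"bank": "Bank A", "rate_pct": 7.1, "tenure_months": 12, "min_deposit": 10_000},
    {"bank": "Bank B", "rate_pct": 7.4, "tenure_months": 24, "min_deposit": 25_000},
    {"bank": "Bank C", "rate_pct": 6.9, "tenure_months": 12, "min_deposit": 5_000},
]

def best_offers(offers, budget, max_tenure_months):
    eligible = [o for o in offers
                if o["min_deposit"] <= budget and o["tenure_months"] <= max_tenure_months]
    return sorted(eligible, key=lambda o: o["rate_pct"], reverse=True)

for offer in best_offers(offers, budget=15_000, max_tenure_months=12):
    print(f"{offer['bank']}: {offer['rate_pct']}% for {offer['tenure_months']} months")
```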


What are the realistic prospects for grid-independent AI data centers in the UK?

Already, colo companies looking to develop in the UK are evaluating on-site gas engine power generation and CHP (combined heat and power). To date, UK CHP projects have been hampered by a lack of grid capacity. Microgrid developments are viewed as a solution to this. CHP and microgrids should also make data center developments more appealing for local government planning departments. ... Data center developments have hit front-line politics, with Rachel Reeves, the new UK Labour government’s Chancellor of the Exchequer (Finance Minister), citing data center infrastructure and reform of planning law as critical to growing the country’s economy. Some projects that were denied planning permission already look likely to be reconsidered, with reports that Deputy Prime Minister Angela Rayner has “recovered two planning appeals for data centers in Buckinghamshire and Hertfordshire”. It seems clear that having any realistic chance of meeting data center capacity demand for AI, cloud and other digital services will require on-site power generation in some form or other.


Why Every IT Leader Needs a Team of Trusted Advisors

When seeking advisors, look for individuals with the time and willingness to join your kitchen cabinet, Kelley says. "Be mindful of their schedules and obligations, since they are doing you a favor," he notes. Additionally, if you're offering any perks, such as paid meals, travel reimbursement, or direct monetary payments, let them know upfront. Such bonuses are relatively rare, however. "More than likely, you’re talking about individual or small group phone calls or meetings." Above all, be honest and open with your team members. "Let them know what kind of help you need and the time frame you are working under," Kelley says. "If you've heard different or contradictory advice from other sources, bring it up and get their reaction," he recommends. Keep in mind that an advisory team is a two-way relationship. Kelley recommends personalizing each connection with an occasional handwritten note, book, lunch, or ticket to a concert or sporting event. On the other hand, if you decide to ignore their input or advice, you need to explain why, he suggests. Otherwise, they might conclude that being a team participant is a waste of time. Also be sure to help your team members whenever they need advice or support. 


Why CI and CD Need to Go Their Separate Ways

Continuous promotion is a concept designed to bridge the gap between CI and CD, addressing the limitations of traditional CI/CD pipelines when used with modern technologies like Kubernetes and GitOps. The idea is to insert an intermediary step that focuses on promotion of artifacts based on predefined rules and conditions. This approach allows more granular control over the deployment process, ensuring that artifacts are promoted only when they meet specific criteria, such as passing certain tests or receiving necessary approvals. By doing so, continuous promotion decouples the CI and CD processes, allowing each to focus on its core responsibilities without overextension. ... Introducing a systematic step between CI and CD ensures that only qualified artifacts progress through the pipeline, reducing the risk of faulty deployments. This approach allows the implementation of detailed rule sets, which can include criteria such as successful test completions, manual approvals or compliance checks. As a result, continuous promotion provides greater control over the deployment process, enabling teams to automate complex decision-making processes that would otherwise require manual intervention.
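
A minimal promotion gate might look like the sketch below: an artifact advances only when every predefined rule passes. The rule names and artifact metadata are hypothetical, and real setups typically express such rules declaratively and evaluate them from a GitOps controller rather than ad-hoc code.

```python
# Minimal "continuous promotion" gate: promote an artifact to the next
# environment only when every predefined rule passes. Rule names and the
# sample artifact metadata are hypothetical.

RULES = {
    "tests_passed":    lambda a: a["test_status"] == "passed",
    "image_scanned":   lambda a: a["critical_vulns"] == 0,
    "change_approved": lambda a: a["approvals"] >= 1,
}

def evaluate_promotion(artifact: dict) -> tuple[bool, list[str]]:
    failures = [name for name, rule in RULES.items() if not rule(artifact)]
    return (not failures, failures)

artifact = {"image": "registry.example.com/app:1.4.2", "test_status": "passed",
            "critical_vulns": 0, "approvals": 0}
ok, failures = evaluate_promotion(artifact)
print("promote to staging" if ok else f"blocked: {failures}")  # blocked on missing approval
```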


CIOs listen up: either plan to manage fast-changing certificates, or fade away

Even when organizations finally decide to set policies and standardize security for new deployments, mitigating the existing deployments is a huge effort, and in the modern stack, there’s no dedicated operations team, he says. That makes it more important for CIOs to take ownership of the problem, Cairns points out. “Especially in larger, more complex and global organizations, the magnitude of trying to push these things through the organization is often underestimated,” he says. “Some of that is having a good handle on the culture and how to address these things in terms of messaging, communications, enforcement of the right policies and practices, and making sure you’ve got the proper stakeholder buy-in at the various points in this process — a lot of governance aspects.” ... Many large organizations will soon need to revoke and reprovision TLS certificates at scale. One in five Fortune 1000 companies use Entrust as their certificate authority, and from November 1, 2024, Chrome will follow Firefox in no longer trusting TLS certificates from Entrust because of a pattern of compliance failures, which the CA argues were, ironically, sometimes caused by enterprise customers asking for more time to deal with revocation. 
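
A first step toward revoking and reprovisioning at scale is simply knowing which endpoints present certificates from which CA. The sketch below, assuming Python's ssl module and the cryptography package are available and the machine has network access, prints issuer and days-to-expiry for a couple of example hostnames; a real inventory would also cover internal services and non-standard ports.

```python
# Small certificate-inventory sketch: list issuer and days-to-expiry per host
# so certificates from a soon-to-be-distrusted CA can be found and replaced.
import ssl
from datetime import datetime, timezone
from cryptography import x509

def cert_summary(host: str, port: int = 443) -> dict:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    expires = cert.not_valid_after_utc  # cryptography >= 42; older versions: not_valid_after
    days_left = (expires - datetime.now(timezone.utc)).days
    return {"host": host, "issuer": cert.issuer.rfc4514_string(), "days_left": days_left}

for host in ("example.com", "example.org"):   # replace with your own estate
    info = cert_summary(host)
    print(f"{info['host']}: issued by {info['issuer']}, expires in {info['days_left']} days")
```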


Effortless Concurrency: Leveraging the Actor Model in Financial Transaction Systems

In a financial transaction system, the data flow for handling inbound payments involves multiple steps and checks to ensure compliance, security, and accuracy. However, potential failure points exist throughout this process, particularly when external systems impose restrictions or when the system must dynamically decide on the course of action based on real-time data. ... Implementing distributed locks is inherently more complex, often requiring external systems like ZooKeeper, Consul, Hazelcast, or Redis to manage the lock state across multiple nodes. These systems need to be highly available and consistent to prevent the distributed lock mechanism from becoming a single point of failure or a bottleneck. ... In this messaging-based model, communication between different parts of the system occurs through messages. This approach enables asynchronous communication, decoupling components and enhancing flexibility and scalability. Messages are managed through queues and message brokers, which ensure orderly transmission and reception of messages. ... Ensuring message durability is crucial in financial transaction systems because it allows the system to replay a message if the processor fails to handle the command due to issues like external payment failures, storage failures, or network problems.
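
A toy version of the actor idea is sketched below: one asyncio-based actor owns an account's state and drains its mailbox sequentially, which is what removes the need for local or distributed locks on that state. It deliberately omits the durability, supervision, and replay concerns the article raises, and it is not the article's production design.

```python
# Toy actor sketch: each account is owned by one actor that processes commands
# from its mailbox one at a time, so no locks are needed for that account's state.
import asyncio

class AccountActor:
    def __init__(self, account_id: str):
        self.account_id = account_id
        self.balance = 0
        self.mailbox: asyncio.Queue = asyncio.Queue()

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg is None:                      # poison pill: shut down
                break
            kind, amount = msg
            if kind == "credit":
                self.balance += amount
            elif kind == "debit" and self.balance >= amount:
                self.balance -= amount           # state mutated only by this actor

async def main():
    actor = AccountActor("acct-42")
    task = asyncio.create_task(actor.run())
    for msg in [("credit", 100), ("debit", 30), ("debit", 500)]:  # last one rejected
        await actor.mailbox.put(msg)
    await actor.mailbox.put(None)
    await task
    print(actor.balance)   # 70: commands were applied sequentially, in order

asyncio.run(main())
```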


Hundreds of LLM Servers Expose Corporate, Health & Other Online Data

Flowise is a low-code tool for building all kinds of LLM applications. It's backed by Y Combinator, and sports tens of thousands of stars on GitHub. Whether it be a customer support bot or a tool for generating and extracting data for downstream programming and other tasks, the programs that developers build with Flowise tend to access and manage large quantities of data. It's no wonder, then, that the majority of Flowise servers are password-protected. ... Leaky vector databases are even more dangerous than leaky LLM builders, as they can be tampered with in such a way that does not alert the users of AI tools that rely on them. For example, instead of just stealing information from an exposed vector database, a hacker can delete or corrupt its data to manipulate its results. One could also plant malware within a vector database such that when an LLM program queries it, it ends up ingesting the malware. ... To mitigate the risk of exposed AI tooling, Deutsch recommends that organizations restrict access to the AI services they rely on, monitor and log the activity associated with those services, protect sensitive data trafficked by LLM apps, and always apply software updates where possible.


Generative AI vs. Traditional AI

Traditional AI, often referred to as “symbolic AI” or “rule-based AI,” emerged in the mid-20th century. It relies on predefined rules and logical reasoning to solve specific problems. These systems operate within a rigid framework of human-defined guidelines and are adept at tasks like data classification, anomaly detection, and decision-making processes based on historical data. In sharp contrast, generative AI is a more recent development that leverages advanced ML techniques to create new content. This form of AI does not follow predefined rules but learns patterns from vast datasets to generate novel outputs such as text, images, music, and even code. ... Traditional AI relies heavily on rule-based systems and predefined models to perform specific tasks. These systems operate within narrowly defined parameters, focusing on pattern recognition, classification, and regression through supervised learning techniques. Data fed into these models is typically structured and labeled, allowing for precise predictions or decisions based on historical patterns. In contrast, generative AI uses neural networks and advanced ML models to produce human-like content. This approach leverages unsupervised or semi-supervised learning techniques to understand underlying data distributions.



Quote for the day:

"Opportunities don't happen. You create them." -- Chris Grosser

Daily Tech Digest - August 27, 2024

Quantum computing attacks on cloud-based systems

Enterprises should indeed be concerned about the advancements in quantum computing. Quantum computers have the potential to break widely used encryption protocols, posing risks to financial data, intellectual property, personal information, and even national security. However, this reaction to danger goes well beyond NIST releasing quantum-resistant algorithms; it’s also crucial for enterprises to start transitioning today to new forms of encryption to future-proof their data security. As other technology advancements arise and enterprises run from one protection to another, work will begin to resemble Whac-A-Mole. I suspect many enterprises will be unable to whack that mole in time, will lose the battle, and be forced to absorb a breach. ... Although quantum computing represents a groundbreaking shift in computational capabilities, the way we address its challenges transcends this singular technology. It’s obvious we need a multidisciplinary approach to managing and leveraging all new advancements. Organizations must be able to anticipate technological disruptions like quantum computing and also become adaptable enough to implement solutions rapidly. 


QA and Testing: The New Cybersecurity Frontline

The convergence of Security, QA, and DevOps is pivotal in the evolution of software security. These teams, often interdependent, share the common objective of minimizing software defects. While security teams may not possess deep QA expertise and QA professionals might lack cybersecurity specialization, their collaborative efforts are essential for a lock-tight security approach. ... Automated testing tools can quickly identify common vulnerabilities and ensure that security measures are consistently applied across all code changes. Meanwhile, manual testing allows for more nuanced assessments, particularly in identifying complex issues that automated tools might miss. The best QA processes rely on both methods working in concert to ensure consistent and comprehensive testing coverage for all releases. While QA focuses on identifying and rectifying functional bugs, cybersecurity experts concentrate on vulnerabilities and weaknesses that could be exploited. By incorporating security testing, such as Mobile Application Security Testing (MAST), into the QA process, teams can proactively address security risks, recognize the importance of security, and prioritize threat prevention alongside quality improvements, enhancing the overall quality and reliability of the software.


Bridging the Divide: How to Foster the CIO-CFO Partnership

Considering today’s evolving business and regulatory landscape, such as the SEC Cybersecurity Ruling and internal focus on finance transformation, a strong CIO-CFO relationship is especially critical. For cybersecurity, the CIO historically focused on managing the organization's technological infrastructure and developing robust security measures, while the CFO concentrated on financial oversight and regulatory compliance. However, the SEC's ruling mandates the timely disclosure of material cybersecurity incidents, requiring a bridge between roles and the need for closer collaboration. The new regulation demands a seamless integration of the CIO’s expertise in identifying and assessing cyber threats with the CFO’s experience in understanding financial implications and regulatory requirements. This means cybersecurity is no longer seen as solely a technology issue but as a critical part of financial risk management and corporate governance. By working closely together, the CIO and CFO can create clear communication channels, shared responsibilities, and joint accountability for incident response and disclosure processes. 


Rising cloud costs leave CIOs seeking ways to cope

Cloud costs have risen for many of CGI’s customers in the past year. Sunrise Banks, which operates community banks and a fintech service, has also seen cloud costs increase recently, says CIO Jon Sandoval. The company is a recent convert to cloud computing; it replaced its own data centers with the cloud just over a year ago, he says. Cloud providers aren’t the only culprits, he says. “I’ve seen increases from all of our applications and services that that we procure, and a lot of that’s just dealing with the high levels of inflation that we’ve experienced over the past couple years,” he adds. “Labor, cost of goods — everything has gotten more expensive.” ... Cloud cost containment requires “assertive and sometimes aggressive” measures, adds Trude Van Horn, CIO and executive vice president at Rimini Street, an IT and security strategy consulting firm. Van Horn recommends that organizations name a cloud controller, whose job is to contain cloud costs. “The notion of a cloud controller requires a savvy and assertive individual — one who knows a lot about cloud usage and your particular cloud landscape and is responsible to monitor trends, look for overages, manage against the budget,” she says.


Zero-Touch Provisioning for Power Management Deployments

At the heart of ZTP lies Dynamic Host Configuration Protocol (DHCP), a foundational network protocol that assigns IP addresses to devices (clients) on a network, facilitating their communication within the network and with external systems. DHCP is an essential network protocol used in IP networks to dynamically assign IP addresses and other network configuration parameters to devices, thereby simplifying network administration. DHCP's capabilities extend beyond basic IP address assignment in providing various configuration details to devices via DHCP options. These options are instrumental in ZTP, allowing devices to automatically receive critical configuration information, including network settings, server addresses, and paths to configuration files. By utilizing DHCP options, devices can self-configure and integrate into the network seamlessly with "zero touch." With DHCP functionalities, ZTP can be utilized to automate the commissioning and configuration of critical power devices such as uninterruptible power systems (UPSs) and power distribution units (PDUs). Network interfaces can be leveraged in conjunction with ZTP for advanced connectivity and management features. 
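
Conceptually, the hand-off works roughly like the sketch below: the DHCP offer carries, via options 66 and 67 (and often vendor-specific option 43), where a newly connected UPS or PDU network card should fetch its configuration. The parsed offer here is a stand-in dict, not output from a real DHCP client.

```python
# Conceptual ZTP sketch: read the boot-server and config-path hints from a
# (stand-in) parsed DHCP offer and build the fetch request a device's ZTP
# client would issue. Option 66 = boot/TFTP server, option 67 = bootfile path.

def build_config_request(dhcp_offer: dict) -> dict:
    options = dhcp_offer["options"]
    return {
        "assigned_ip": dhcp_offer["yiaddr"],                 # address given to the device
        "config_server": options[66],                        # where to fetch configuration
        "config_path": options[67],                          # which file to fetch
        "fetch_url": f"tftp://{options[66]}/{options[67]}",  # what the ZTP client retrieves
    }

offer = {"yiaddr": "10.20.30.41",
         "options": {66: "ztp.example.net", 67: "pdu-profiles/rack12.cfg"}}
print(build_config_request(offer)["fetch_url"])  # tftp://ztp.example.net/pdu-profiles/rack12.cfg
```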


Exclusive: Gigamon CEO highlights importance of deep observability

The importance of deep observability is heightened as companies undergo digital transformation, often moving workloads into virtualized environments or public clouds. This shift can increase risks related to compliance and security. Gigamon's deep observability helps CIOs move application workloads without compromising security. "You can maintain your security posture regardless of where the workload moves," Buckley said. "That's a really powerful capability for organizations today." Overall, the deep observability market grew 61 percent in 2023 and continues to expand as organizations increasingly embrace hybrid cloud infrastructure, with a forecasted CAGR of 40 percent and projected revenue of nearly $2B in 2028, according to research firm 650 Group. "CIOs are moving workloads to wherever it makes the organization more effective and efficient, whether that's public cloud, on-premises, or a hybrid approach," Buckley explained. "The key is to ensure there's no increased risk to the organization, and the security profile remains constant."


Prioritizing your developer experience roadmap

Identifying the biggest points of friction will be an ongoing process, but, as he said, “A lot of times, engineers have been at a place for long enough where they’ve developed workarounds or become used to problems. It’s become a known experience. So we have to look at their workflow to see what the pebbles are and then remove them.” Successful platform teams pair program with their customers regularly. It’s an effective way to build empathy. Another thing to prioritize is asking: Is this affecting just one or two really vocal teams, or is it something systemic across the organization? ... Another way that platform engineering differs from the behemoth legacy platforms is that it’s not a giant one-off implementation. In fact, Team Topologies has the concept of the Thinnest Viable Platform: you start with something small but sturdy that you can build your platform strategy on top of. For most companies, the biggest time-waster is finding things, so your first TVP is often either a directory of who owns what or better documentation. But don’t trust that instinct — ask first. Running a developer productivity survey will let you know what the biggest frustrations are for your developers. Ask targeted questions, not open-ended ones.


How to prioritize data privacy in core customer-facing systems

Before creating a data-sharing agreement with a third party, review the organization’s data storage, collection and transfer safeguards. Verify that the organization’s data protection policies are as robust as yours. Further, when drafting an eventual agreement, ensure that contract terms dictate a superior level of protection, delineating the responsibilities and expectations of each party in terms of compliance and cybersecurity. Due diligence at the front end of a relationship is necessary. However, it’s also essential to maintain an open line of communication after the partnership commences. Organizations should regularly reassess their partners’ commitments to data privacy by inquiring about their ongoing data protection policies, including data storage timelines and the intended uses of said data. ... Most customers can opt out of data collection and tracking at any time. This preference is known as “consent” — and enabling its collection is only half the journey. Organizations must also proactively enforce consent to ensure that downstream data routing doesn’t jeopardize or invalidate a customer’s express preferences.
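To make the enforcement half of that journey concrete, here is a hedged Python sketch of a consent check applied before data leaves the organization. `ConsentRegistry`, `Purpose`, and `route_to_partner` are hypothetical names invented for illustration, not any particular vendor's API; the point is simply that every outbound flow consults the customer's recorded preference at routing time.

```python
from dataclasses import dataclass, field
from enum import Enum


class Purpose(Enum):
    ANALYTICS = "analytics"
    MARKETING = "marketing"
    THIRD_PARTY_SHARING = "third_party_sharing"


@dataclass
class ConsentRegistry:
    # customer_id -> purposes the customer currently consents to
    grants: dict[str, set[Purpose]] = field(default_factory=dict)

    def opt_out(self, customer_id: str, purpose: Purpose) -> None:
        self.grants.get(customer_id, set()).discard(purpose)

    def allows(self, customer_id: str, purpose: Purpose) -> bool:
        return purpose in self.grants.get(customer_id, set())


def route_to_partner(record: dict, registry: ConsentRegistry) -> bool:
    """Send a record to a third party only if consent is still in force."""
    if not registry.allows(record["customer_id"], Purpose.THIRD_PARTY_SHARING):
        return False                 # block the transfer; flag for review
    # ... the actual transfer (SFTP drop, partner API call) would go here ...
    return True


registry = ConsentRegistry({"cust-42": {Purpose.THIRD_PARTY_SHARING}})
record = {"customer_id": "cust-42", "email": "x@example.com"}
print(route_to_partner(record, registry))      # True: consent on file
registry.opt_out("cust-42", Purpose.THIRD_PARTY_SHARING)
print(route_to_partner(record, registry))      # False: preference enforced
```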


Choosing a Data Quality Tool: What, Why, How

Data quality describes a characteristic or attribute of the data itself, but equally important for achieving and maintaining the quality of data is the ability to monitor and troubleshoot the systems and processes that affect data quality. Data observability is most important in complex, distributed data systems such as data lakes, data warehouses, and cloud data platforms. It allows companies to monitor and respond in real time to problems related to data flows and the data elements themselves. Data observability tools provide visibility into data as it traverses a network by tracking data lineages, dependencies, and transformations. The products send alerts when anomalies are detected, and apply metadata about data sources, schemas, and other attributes to provide a clearer understanding and more efficient management of data resources. ... A company’s data quality efforts are designed to achieve three core goals: Promote collaboration between IT and business departments; Allow IT staff to manage and troubleshoot all data pipelines and data systems, whether they’re completely internal or extend outside the organization; Help business managers manipulate the data in support of their work toward achieving business goals.
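As one small, hedged illustration of the kind of check a data observability tool runs (alongside lineage tracking and schema monitoring), the Python sketch below flags a pipeline run whose row volume deviates sharply from recent history. The function name, sample numbers, and threshold are assumptions chosen for the example, not a specific product's behavior.

```python
import statistics


def volume_anomaly(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's row count if it sits more than `threshold` standard
    deviations away from the mean of recent runs."""
    if len(history) < 5:                       # not enough history to judge
        return False
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(today - mean) / stdev > threshold


recent_runs = [10_120, 9_980, 10_240, 10_050, 10_190, 10_075]
print(volume_anomaly(recent_runs, today=3_400))   # True: likely a broken feed
print(volume_anomaly(recent_runs, today=10_110))  # False: within normal range
```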


Businesses increasingly turn to biometrics for physical access and visitor management

Experts suggest that to address these concerns, employers need to be more transparent about their use of biometric technologies and implement robust safeguards to protect employees’ data. This includes informing employees about how their data will be used, stored, and protected from potential breaches. Employers should also offer alternatives for those who are uncomfortable with biometric systems to ensure no employee feels coerced. Companies that prioritize transparency, consent, and data protection are more likely to gain employee trust and avoid backlash. However, without clear guidelines and protections, resistance to workplace biometrics is likely to grow. “Education needs to be laid out very clearly and regularly that, ‘Look, biometrics is not an invasion of privacy,’” adds Murad. “‘It’s providing an envelope of security for your privacy, it’s protecting it.’ I think that message is getting there, but it’s taking time.” Several companies have recently introduced new physical access security technologies. Nabla Works has launched advanced facial recognition tools with anti-spoofing features for secure access across various applications.



Quote for the day:

"It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change." -- Charles Darwin

Daily Tech Digest - August 26, 2024

The definitive guide to data pipelines

A key data pipeline capability is to track data lineage, including methodologies and tools that expose data’s life cycle and help answer questions about who, when, where, why, and how data changes. Data pipelines transform data, which is part of the data lineage’s scope, and tracking data changes is crucial in regulated industries or when human safety is a consideration. ... Other data catalog, data governance, and AI governance platforms may also have data lineage capabilities. “Business and technical stakeholders must equally understand how data flows, transforms, and is used across sources with end-to-end lineage for deeper impact analysis, improved regulatory compliance, and more trusted analytics,” says Felix Van de Maele, CEO of Collibra. ... As for the data ops behind data pipelines: when you deploy pipelines, how do you know whether they receive, transform, and send data accurately? Are data errors captured, and do single-record data issues halt the pipeline? Are the pipelines performing consistently, especially under heavy load? Are transformations idempotent, or are they streaming duplicate records when data sources have transmission errors?
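To ground the last of those data ops questions, here is a hedged Python sketch of one way to make a transform idempotent: key every record on a stable identifier and upsert by that key, so a retransmitted source batch cannot produce duplicate output rows. The record fields and the in-memory sink are illustrative assumptions standing in for whatever store a real pipeline writes to.

```python
import hashlib


def record_key(record: dict) -> str:
    """Derive a stable key from the record's business identifier and date."""
    raw = f"{record['order_id']}|{record['event_date']}"
    return hashlib.sha256(raw.encode()).hexdigest()


def transform(record: dict) -> dict:
    """Pure transformation: the same input always yields the same output."""
    return {**record, "amount_usd": round(record["amount_cents"] / 100, 2)}


def load(batch: list[dict], sink: dict[str, dict]) -> None:
    """Upsert by key: replaying the same batch leaves the sink unchanged."""
    for record in batch:
        sink[record_key(record)] = transform(record)


sink: dict[str, dict] = {}
batch = [{"order_id": "A1", "event_date": "2024-08-26", "amount_cents": 1999}]
load(batch, sink)
load(batch, sink)          # replay after a transmission error
print(len(sink))           # 1: no duplicate records despite the retry
```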


Living with trust issues: The human side of zero trust architecture

As we’ve become more dependent on technology, IT environments have become more complex, and the threats against them have grown more intense and more damaging. To tackle these growing security challenges, which demanded a stronger and more flexible approach, industry experts, security practitioners, and tech providers came together to develop the zero trust architecture (ZTA) framework. This development led to a growing recognition of the importance of prioritizing verification over trust, which made ZTA a cornerstone of modern cybersecurity strategies. The main idea behind ZTA is to “never trust, always verify.” ... Implementing the ZTA framework means that every action the IT and security teams handle is filtered through a security-first lens. However, the over-repeated mantra of “never trust, always verify” may affect the psychological well-being of those implementing it. Imagine spending hours monitoring every network activity while constantly questioning whether the information is genuine and whether people’s motives are pure. This suspicious climate not only affects the work environment but also spills over into personal interactions, eroding trust with others.


Top technologies that will disrupt business in 2025

Chaplin finds ML useful for identifying customer-related trends and predicting outcomes. That sort of forecasting can help allocate resources more effectively, he says, and engage customers better — for example, when recommending products. “While gen AI undoubtedly has its allure, it’s important for business leaders to appreciate the broader and more versatile applications of traditional ML,” he says. ... What Skillington touches on is the often-overlooked facet of any successful digital transformation: it all starts with data. By breaking down data silos, establishing holistic data governance strategies, developing the right data architecture for the business, and building data literacy across disciplines, organizations can not only gain better access to their data but also better understand how ... Edge computing and 5G are two complementary technologies that are maturing, getting smaller, and delivering tangible business results securely, says Rogers Jeffrey Leo John, CTO and co-founder of DataChat. “Edge devices such as mobile phones can now run intensive tasks like AI and ML, which were once only possible in data centers,” he says.


Meta presents Transfusion: A Recipe for Training a Multi-Modal Model Over Discrete and Continuous Data

Transfusion is trained on a balanced mixture of text and image data, with each modality being processed through its specific objective: next-token prediction for text and diffusion for images. The model’s architecture consists of a transformer with modality-specific components, where text is tokenized into discrete sequences and images are encoded as latent patches using a variational autoencoder (VAE). The model employs causal attention for text tokens and bidirectional attention for image patches, ensuring that both modalities are processed effectively. Training is conducted on a large-scale dataset consisting of 2 trillion tokens, including 1 trillion text tokens and 692 million images, each represented by a sequence of patch vectors. The use of U-Net down and up blocks for image encoding and decoding further enhances the model’s efficiency, particularly when compressing images into patches. Transfusion demonstrates superior performance across several benchmarks, particularly in tasks involving text-to-image and image-to-text generation. 
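The mixed attention pattern is the most mechanical part of that description, so here is a hedged Python sketch of it: causal attention for text positions, bidirectional attention among patches of the same image. This is an illustration of the idea rather than Meta's implementation; the `transfusion_style_mask` function and the segment-id convention (-1 for text, a non-negative id per image) are assumptions made for the example.

```python
import numpy as np


def transfusion_style_mask(segments: list[int]) -> np.ndarray:
    """Return a boolean [n, n] mask where True means position i may attend to j."""
    n = len(segments)
    mask = np.tril(np.ones((n, n), dtype=bool))          # causal baseline
    seg = np.array(segments)
    # Patches of the same image (segment id >= 0) see each other both ways.
    same_image = (seg[:, None] == seg[None, :]) & (seg[:, None] >= 0)
    return mask | same_image


# Positions 0-2: text, positions 3-6: patches of image 0, position 7: text.
segments = [-1, -1, -1, 0, 0, 0, 0, -1]
mask = transfusion_style_mask(segments)
print(mask[3, 6])   # True: an image patch attends to later patches of its image
print(mask[0, 2])   # False: a text token never attends to future tokens
```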


AI Assistants: Picking the Right Copilot

The best assistant operates as an agent that understands what context the underlying AI can assume from its known environment. IDE assistants such as GitHub Copilot know that they are responding with programming projects in mind. GitHub Copilot examines script comments as well as syntax in a given script before crafting a suggestion, weighing both against its trained datasets, consisting of GPT training and the codebase of GitHub's public repositories. Because GitHub Copilot was trained on the public repositories in GitHub, it has a slightly different "perspective" on syntax than that of ChatGPT ADA. Thus, the choice of corpus for an AI model can influence what answer an AI assistant yields to users. A good AI assistant should offer a responsive chat feature to indicate its understanding of its environment. Jupyter, Tabnine, and Copilot all offer a native chat UI for the user. The chat experience influences how well a professional feels the AI assistant is working. How well it interprets prompts and how accurate its suggestions are both start with the conversational assistant experience, so technical professionals should note their experiences to see which assistant works best for their projects.


Is the vulnerability disclosure process glitched? How CISOs are being left in the dark

The elephant in the room regarding misaligned motives and communications between researchers and software vendors is that vendors frequently try to hide or downplay the bugs that researchers feel obligated to make public. “The root cause is a deep-seated fear and prioritizing reputation over security of users and customers,” Rapid7’s Condon says. “What it comes down to many times is that organizations are afraid to publish vulnerability information because of what it might mean for them legally, reputationally, and financially if their customers leave. Without a concerted effort to normalize vulnerability disclosure to reward and incentivize well-coordinated vulnerability disclosure, we can pick at communication all we want. Still, the root cause is this fear and the conflict that it engenders between researchers and vendors.” Condon is, however, sympathetic to the vendors’ fears. “They don’t want any information out there because they are understandably concerned about reputational damage. They’re seeing major cyberattacks in the news, CISOs and CEOs dragged in front of Congress or the Senate here in the US, and lawsuits are coming out against them. ...”


Level Up Your Software Quality With Static Code Analysis

Behind high-quality software is high-quality code. The same core coding principles remain true regardless of how the code was written, whether by humans or AI coding assistants. Code must be easy to read, maintain, understand and change. Code structure and consistency should be robust and secure to ensure the application performs well. Code devoid of issues helps you attain the most value from your software. ... While static analysis focuses on code quality and reduces the number of problems to be found later in the testing stage, application testing ensures that your software actually runs as it was designed. By incorporating both automated testing and static analysis, developers can manage code quality through every stage of the development process, quickly find and fix issues, and improve the overall reliability of their software. A combination of both is vital to software development. In fact, a good static analysis tool can even be integrated into your testing tools to track and report the percentage of code covered by your unit tests. Sonar recommends test coverage of at least 80%; below that threshold, your code fails to meet the recommended standard.
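As a toy illustration of what static analysis means in practice (inspecting source without running it), the Python sketch below walks a module's syntax tree and flags two common quality issues: bare `except:` clauses and functions without docstrings. It is not how Sonar or any specific analyzer works internally; the rules, names, and sample input are assumptions chosen only to show the mechanism.

```python
import ast


def analyze(source: str) -> list[str]:
    """Return a list of findings for the given Python source code."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except' hides errors")
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                findings.append(
                    f"line {node.lineno}: function '{node.name}' lacks a docstring"
                )
    return findings


sample = """
def charge(card, amount):
    try:
        card.debit(amount)
    except:
        pass
"""
for finding in analyze(sample):
    print(finding)
```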


Two strategies to protect your business from the next large-scale tech failure

The key to mitigating another large-scale system failure is to plan for catastrophic events and practice your response. Make dealing with failure part of normal business practices. When failure is unexpected and rare, the processes to deal with it are untested and may even result in actions which make the failure worse. Build a network and a team that can adapt and react to failures. Remember when insurance companies ran their own data centres and disaster recovery tests were conducted twice a year? ... The second strategy for minimizing large-scale failures is to avoid the software monoculture created by the concentration of digital tech suppliers. It’s more complex but worth it. Some corporations have a policy of buying their core networking equipment from three or four different vendors. Yes, it makes day-to-day management a little more difficult, but they have the assurance that if one vendor has a failure, their entire network is not toast. Whether it’s tech or biology, a monoculture is extremely vulnerable to epidemics which can destroy the entire system. In the CrowdStrike scenario, if corporate networks had been a mix of Windows, Linux and other operating systems, the damage would not have been as widespread.


India's Critical Infrastructure Suffers Spike in Cyberattacks

The adoption of emerging technologies such as AI and cloud, together with the focus on innovation and remote working, has driven digital transformations, boosting companies' need for more security defenses, according to Manu Dwivedi, partner and leader for cybersecurity at consultancy PwC India. "AI-enabled phishing and aggressive social engineering have elevated ransomware to the top concern," he says. "While cloud-related threats are concerning, greater interconnectivity between IT and OT environments and increased usage of open-source components in software are increasing the available threat surface for attackers to exploit." Indian organizations also need to harden their systems against insider threats, which requires a combination of business strategy, culture, training, and governance processes, Dwivedi says. ... The growing demand for AI has also shaped the threat landscape in the country, and threat actors have already started experimenting with different AI models and techniques, says PwC India's Dwivedi. "Threat actors are expected to use AI to generate customized and polymorphic malware based on system exploits, which escapes detection from signature-based and traditional detection methods," he says.


Architectural Patterns for Enterprise Generative AI Apps

In the RAG pattern, we integrate a vector database that can store and index embeddings (numerical representations of digital content). We use various search algorithms like HNSW or IVF to retrieve the top k results, which are then used as the input context. The search is performed by converting the user's query into embeddings. The top k results are added to a well-constructed prompt, which guides the LLM on what to generate and the steps it should follow, as well as what context or data it should consider. ... GraphRAG is an advanced RAG approach that uses a graph database to retrieve information for specific tasks. Unlike traditional relational databases that store structured data in tables with rows and columns, graph databases use nodes, edges, and properties to represent and store data. This method provides a more intuitive and efficient way to model, view, and query complex systems. ... Like the basic RAG system, GraphRAG also uses a specialized database to store the knowledge data it generates with the help of an LLM. However, generating the knowledge graph is more costly compared to generating embeddings and storing them in a vector database. 
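The retrieval flow in the basic RAG pattern is compact enough to sketch. The hedged Python example below embeds a query, scores stored chunks by cosine similarity, and splices the top-k results into a prompt; `embed` is a deterministic placeholder for a real embedding model, and a production system would query a vector database with HNSW or IVF indexes rather than the brute-force similarity shown here, but the flow is the same.

```python
import hashlib

import numpy as np


def embed(text: str) -> np.ndarray:
    """Deterministic placeholder embedding; swap in a real model in practice."""
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).standard_normal(384)


def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Brute-force cosine-similarity retrieval over the stored chunks."""
    q = embed(query)
    vectors = np.stack([embed(c) for c in chunks])
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]


def build_prompt(query: str, chunks: list[str]) -> str:
    """Splice the retrieved context into a well-constructed prompt."""
    context = "\n".join(f"- {c}" for c in top_k(query, chunks))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )


docs = ["UPS firmware 2.1 release notes", "PDU outlet metering guide", "Office lunch menu"]
print(build_prompt("How do I meter PDU outlets?", docs))
```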



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry