Daily Tech Digest - July 31, 2024

Rise of Smart Testing: How AI is Revolutionizing Software Testing

Development teams must adopt new techniques when creating and testing applications. Test automation can greatly increase productivity and accuracy, but traditional frameworks frequently demand a great deal of manual labor for script construction and maintenance, which limits their effectiveness and ability to scale. ... Agile development is built on continuous improvement, yet rapid code changes strain conventional test automation: test cases become brittle, require ongoing maintenance, and slow down the release cycle. This is where self-healing test automation scripts come in. Frameworks with AI capabilities can recognize these changes and adjust accordingly, which translates into shorter release cycles, lower maintenance overhead, and test scripts that repair themselves. ... Extensive test coverage is difficult to achieve with traditional testing techniques. Artificial intelligence (AI) fills this gap by automatically generating a wide range of test cases from an analysis of requirements, code, and previous tests, covering positive and negative scenarios, as well as edge cases, that human testers might overlook.
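To make the self-healing idea concrete, here is a minimal sketch in Python (assuming Selenium WebDriver; the ranked fallback locators and the healing report are illustrative, not the API of any specific AI testing framework, which would typically generate and score candidate locators automatically):

    # Minimal self-healing locator sketch. Assumes Selenium WebDriver; the ranked
    # fallback list is hand-written here, whereas an AI-capable framework would
    # generate and score candidate locators itself.
    from selenium.common.exceptions import NoSuchElementException
    from selenium.webdriver.common.by import By

    # Ordered locator candidates for the same logical element ("checkout button").
    CHECKOUT_LOCATORS = [
        (By.ID, "checkout-btn"),                      # preferred, but brittle if IDs change
        (By.CSS_SELECTOR, "[data-test='checkout']"),  # survives cosmetic refactors
        (By.XPATH, "//button[contains(., 'Checkout')]"),
    ]

    def find_with_healing(driver, locators):
        """Try each locator in order and report which fallback 'healed' the step."""
        for index, (by, value) in enumerate(locators):
            try:
                element = driver.find_element(by, value)
                if index > 0:
                    print(f"Healed: primary locator failed, matched fallback #{index}: {value}")
                return element
            except NoSuchElementException:
                continue
        raise NoSuchElementException("No locator matched; the test needs manual maintenance")

    # Usage, with `driver` assumed to be an initialized WebDriver:
    # find_with_healing(driver, CHECKOUT_LOCATORS).click()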


What CISOs need to keep CEOs (and themselves) out of jail

Considering the changes in the Cybersecurity Framework 2.0 (CSF 2.0) emphasizing governance and communication with the board of directors, Sullivan is right to assume that liability will not stop at the CISO and will likely move upwards. In his essay, Sullivan urges CEOs to give CISOs greater resources to do their jobs. But if he’s talking about funding to purchase more security controls, this might be a hard sell for CEOs. ... CEOs would benefit from showing that they care about cybersecurity and adding metrics to company reports to demonstrate it is a significant concern. For CISOs, agreeing to a set of metrics with the CEO would provide a visible North Star and a forcing function for aligning resources and headcount to ensure metrics continue to trend in the right direction. ... CEOs who are serious about cybersecurity must prioritize collaborating with their CISOs and putting them in the rotation for regular meetings. A healthy budget increase for tools may be necessary as AI injects many new risks, but it is neither sufficient nor the most important step. CISOs need better people and better processes to deliver on promises of keeping the enterprise safe.


Who should own cloud costs?

The exponential growth of AI and generative AI initiatives is often identified as the true culprit. Although packed with potential, these advanced technologies consume extensive cloud resources, increasing costs that organizations often struggle to manage effectively. The main issues usually stem from a lack of visibility and control over these expenses. The problems go beyond just tossing around the term “finops” at meetings. It comes down to a fundamental understanding of who owns and controls cloud costs in the organization. Trying to identify cloud cost ownership and control often becomes a confusing free-for-all. ... Why does giving engineering control over cloud costs make such a difference? For one, engineers are typically closer to the actual usage and deployment of cloud resources. When they build something to run on the cloud, they are more aware of how applications and data storage systems use cloud resources. Engineers can quickly identify and rectify inefficiencies, ensuring that cloud resources are used cost-effectively. Moreover, engineers with skin in the game are more likely to align their projects with broader business goals, translating technical decisions into tangible business outcomes.


Generative AI and Observability – How to Know What Good Looks Like

In software development circles, observability is often defined as the combination of logs, traces and metric data to show how applications perform. In classic mechanical engineering and control theory, observability looks at the inputs and outputs for a system to judge how changes affect the results. In practice, looking at the initial requests and what gets returned provides data that can be used for judging performance. Alongside this, there is the quality of the output to consider. Did the result answer the user’s question, and how accurate was the answer? Were there any hallucinations in the response that would affect the user? And where did those results come from? Tracking AI hallucination rates across different LLMs and services shows how those services perform, with levels of inaccuracy varying from around 2.5 percent to 22.4 percent. All the steps involved around managing your data and generative AI app can affect the quality and speed of response at runtime. For example, retrieval augmented generation (RAG) allows you to find and deliver company data in the right format to the LLM so that this context can provide a more relevant response.
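As a rough illustration of wiring these signals together, the Python sketch below records inputs, outputs, latency and a simple groundedness flag per request; call_llm and check_groundedness are hypothetical stand-ins rather than the API of any particular observability product:

    # Request-level observability sketch for a GenAI app. The model call and the
    # groundedness check are simplified stand-ins for illustration only.
    import json
    import time
    import uuid

    def call_llm(prompt, context_docs):
        # Stand-in for your model client; a real implementation calls an LLM API.
        return f"Example answer based on {len(context_docs)} retrieved documents."

    def check_groundedness(answer, context_docs):
        # Stand-in for a real check comparing the answer to the retrieved context.
        return len(context_docs) > 0

    def observe_llm_call(prompt, context_docs):
        trace_id = str(uuid.uuid4())            # ties logs, traces and metrics together
        started = time.time()
        answer = call_llm(prompt, context_docs)
        record = {
            "trace_id": trace_id,
            "prompt_chars": len(prompt),
            "context_docs": len(context_docs),
            "latency_ms": round((time.time() - started) * 1000, 1),
            "grounded": check_groundedness(answer, context_docs),
            "answer_preview": answer[:200],
        }
        print(json.dumps(record))               # ship to your logging/metrics pipeline
        return answer

    observe_llm_call("What is our refund policy?", ["refund-policy.md"])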


Security platforms offer help to tame product complexity, but skepticism remains

The biggest issue enterprises cited was what they saw as an inherent contradiction between the notion of a platform, which to them had the connotation of a framework on which things were built, and the specialization of most offerings. “You can’t have five foundations for one building,” one CSO said sourly, and pointed out that there are platforms for network, cloud, data center, application, and probably even physical security. While there was an enterprise hope that platforms would somehow unify security, they actually seemed to divide it. ... It seems to me that divided security responsibility, arising from the lack of a single CSO in charge, is also a factor in the platform question. Vendors who sell into such an account not only have less incentive to promote a unifying security platform vision, they may have a direct motivation not to do that. Of 181 enterprises, 47 admit that their security portfolio was created, and is sustained, by two or more organizations, and every enterprise in this group is without a CSO. Who would a security platform provider call on in these situations? Would any of the organizations involved in security want to share their decision power with another group?


The cost of a data breach continues to escalate

The type of attack influenced the financial damage, the report noted. Destructive attacks, in which the bad actors delete data and destroy systems, cost the most: $5.68 million per breach ($5.23 million in 2023). Data exfiltration, in which data is stolen, and ransomware, in which data is encrypted and a ransom demanded, came second and third, at $5.21 million and $4.91 million respectively. However, noted Fritz Jean-Louis, principal cybersecurity advisor at Info-Tech Research Group, sometimes attackers combine their tactics. “Double extortion ransomware attacks are a key factor that is influencing the cost of data breaches,” he said in an email. “Since 2023, we have observed that ransomware attacks now include double extortion attacks ... “This risk of shadow data will become even more elevated in the AI era, with data serving as the foundation on which new AI-powered applications and use-cases are being built,” added Jennifer Kady, vice president, security at IBM. “Gaining control and visibility over shadow data from a security perspective has emerged as a top priority as companies move quickly to adopt generative AI, while also ensuring security and privacy are at the forefront.”


If You are Reachable, You Are Breachable, and Firewalls & VPNs are the Front Door

It’s about understanding that the network is no longer a castle to be fortified but a conduit only, with entity-to-entity access authorized discretely for every connection based on business policies informed by the identity and context of the entities connecting. Gone are IP-based policies and ACLs, persistent tunnels, trusted and untrusted zones, and implicit trust. With a zero-trust architecture in place, the internet becomes the corporate network and point-to-point networking fades in relevance over time. Firewalls become like the mainframe – serving a diminishing set of legacy functions – and no longer hindering the agility of a mobile and cloud-driven enterprise. This shift is not just a technical necessity but also a regulatory and compliance imperative. With government bodies mandating zero-trust models and new SEC regulations requiring breach reporting, warning shots have been fired. Cybersecurity is no longer just an IT issue; it has elevated to a boardroom priority, with far-reaching implications for business continuity and reputation. Many access control solutions have claimed to adopt zero-trust by adding dynamic trust. 
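A toy Python sketch of that shift, replacing IP-based ACLs with per-connection decisions driven by identity and context; the attributes and policy entries are illustrative assumptions, not any vendor's schema:

    # Toy zero-trust access decision: every connection is evaluated against business
    # policy using the identity and context of the requesting entity, not its source
    # IP or network zone. Attribute names and policy entries are illustrative.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_role: str
        device_compliant: bool      # e.g. managed, patched, disk-encrypted
        mfa_verified: bool
        target_app: str

    POLICY = {                      # target application -> roles allowed to reach it
        "payroll": {"finance", "hr"},
        "build-server": {"engineering"},
    }

    def authorize(req: AccessRequest) -> bool:
        """Grant access per connection only when identity and context satisfy policy."""
        allowed_roles = POLICY.get(req.target_app, set())
        return req.user_role in allowed_roles and req.device_compliant and req.mfa_verified

    print(authorize(AccessRequest("finance", True, True, "payroll")))    # True
    print(authorize(AccessRequest("finance", False, True, "payroll")))   # False: risky device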


Indian construction industry leads digital transformation in Asia Pacific

“While challenges like the increasing prices of raw materials and growing competition persist in the Indian market, its current strong economic state and steady outlook for the forthcoming years, as reported by the IMF, have provided a congenial atmosphere for businesses to evaluate and adopt newer technologies, and consequently lead the Asia Pacific market in terms of investments in transformational technologies. Indian businesses have aptly recognised this phase as the ideal time to leverage digital technologies to identify newer growth pockets, usher in efficiencies throughout project lifecycles and give them a competitive edge,” said Sumit Oberoi, Senior Industry Strategist, Asia Pacific at Autodesk. “Priority areas for construction businesses to improve digital adoption include starting small, selecting a digital champion, tracking a range of success measures, and asking whether your business is AI ready,” he added. ... David Rumbens, Partner at Deloitte Access Economics, said, “The Indian construction sector, fuelled by a surge in demand for affordable housing as well as supportive government policies to boost urban infrastructure, is poised to make a strong contribution as India’s economy grows by 6.9% over the next year


Recovering from CrowdStrike, Prepping for the Next Incident

In the future, organizations could consider whether outside factors make a potential software acquisition riskier, Sayers said. A product widely used by Fortune 100 companies, for example, has the added risk of being an attractive target to attackers hoping to hit many such victims in a single attack. “There is a soft underbelly in the global IT world, where you can have instances where a particular piece of software or a particular vendor is so heavily relied upon that they themselves could potentially become a target in the future,” Sayers said. Organizations also need to identify any single points of failure in their environments — instances where they rely on an IT solution whose disruption, whether deliberate or accidental, could bring down their whole organization. When one is identified, they need to begin planning around the risks and looking for backup processes. Sayers noted that some types of resiliency measures may be too expensive for most organizations to adopt; some entities are already priced out of just backing up all their data, and many would be unable to afford maintaining backup or alternate IT infrastructure to which they could roll over.


AI And Security: It Is Complicated But Doesn't Need To Be

While AI may present a potential risk for companies, it could also be part of the solution. As AI processes information differently from humans, it can look at issues differently and come up with breakthrough solutions. For example, AI produces better algorithms and can solve mathematical problems that humans have struggled with for many years. As such, when it comes to information security, algorithms are king, and AI, machine learning (ML), or a similar cognitive computing technology could come up with new ways to secure data. This is a real benefit of AI, as it can not only sort massive amounts of information but also identify patterns, allowing organisations to see things that they never noticed before. This brings a whole new element to information security. ... As these solutions will bring benefits to the workplace, companies may consider putting non-sensitive data into systems to limit exposure of internal data sets while driving efficiency across the organisation. However, organisations need to realise that they can’t have it both ways, and data they put into such systems will not remain private.



Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad

Daily Tech Digest - July 30, 2024

Cyber security and Compliance: The Convergence of Regtech Solutions

While cybersecurity, in itself, is an area that requires significant resources to ensure compliance, a business organisation needs to deal with numerous other regulations. The business regulatory ecosystem is made up of over 1,500 acts and rules and more than 69,000 compliances. As such, each enterprise needs to figure out the regulatory requirements applicable to their business. The complexity of the compliance framework is such that businesses are often lagging behind their compliance timelines. Take, for instance, a single-entity MSME with a single-state operation involved in manufacturing automotive components. Even such an operation requires the employer to keep up with 624 unique compliances. These requirements can reach close to 1,000 for a pharmaceutical enterprise. Persisting with manual compliance methods while technology has taken over every other business operation has become the root cause of delays, lapses, and defaults. While businesses are investing in the best possible technological solutions for cybersecurity issues, they are disregarding the impact of technology on their compliance functions.


Millions of Websites Susceptible to XSS Attack via OAuth Implementation Flaw

Essentially, the ‘attack’ requires only a crafted link to Google (mimicking a HotJar social login attempt but requesting a ‘code token’ rather than a simple ‘code’ response to prevent HotJar consuming the once-only code); and a social engineering method to persuade the victim to click the link and start the attack (with the code being delivered to the attacker). This is the basis of the attack: a false link (but it’s one that appears legitimate), persuading the victim to click the link, and receipt of an actionable log-in code. “Once the attacker has a victim’s code, they can start a new login flow in HotJar but replace their code with the victim code – leading to a full account takeover,” reports Salt Labs. The vulnerability is not in OAuth, but in the way in which OAuth is implemented by many websites. Fully secure implementation requires extra effort that most websites either don’t realize is needed or don’t have the in-house skills to carry out. From its own investigations, Salt Labs believes that there are likely millions of vulnerable websites around the world. The scale is too great for the firm to investigate and notify everyone individually.
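To illustrate the kind of extra implementation effort being described, here is a simplified Python sketch of defensive checks in an OAuth redirect handler; the storage and names are illustrative, and a production implementation should follow the OAuth 2.0 and PKCE specifications rather than this sketch:

    # Simplified sketch: bind each authorization code to the session that actually
    # started the login flow (via the state value) so a code injected by an attacker
    # from a victim's flow is rejected. Storage and parameter names are illustrative.
    import hmac
    import secrets

    PENDING_FLOWS = {}   # state -> session_id, for flows this server initiated

    def start_login(session_id):
        state = secrets.token_urlsafe(32)
        PENDING_FLOWS[state] = session_id
        return state     # include this state in the authorization request URL

    def handle_redirect(session_id, state, code):
        """Accept the code only if this session initiated the flow with this state."""
        expected_session = PENDING_FLOWS.pop(state, None)
        if expected_session is None:
            return False                                  # flow never started here
        if not hmac.compare_digest(expected_session, session_id):
            return False                                  # code injected from another user's flow
        # Exchange `code` at the token endpoint here; PKCE adds a further binding.
        return True

    state = start_login("sess-abc")
    print(handle_redirect("sess-abc", state, "code-from-provider"))   # True
    print(handle_redirect("sess-evil", state, "stolen-code"))         # False: state already consumed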


How to Build a High-Performance Analytics Team

The first approach, which he called the “artisan model,” involves building a small team of highly experienced (and highly paid) data scientists. Such skilled and capable team members can generally tackle all aspects of solving a business problem, from subject matter expert engagement to hypothesis testing, production, and iteration. The “factory approach,” on the other hand, resembles more of an assembly line, with a large group of people divvying up tasks based on their areas of expertise: some working on the business problem definition, others handling data acquisition, and so on. This second approach requires hiring more people than the first approach, but the pay differential between the two types of team members is significant enough that the two approaches cost roughly the same. ... An analytics team needs to grow and evolve to survive, and management must treat its staff accordingly. “Data scientists are some of the most sought-after talent in the economy right now,” Thompson stressed, “So I’m working every day to make sure that my team is happy and that they’re getting work they’re interested in ­– that they’re being paid well and treated well.”


Securing remote access to mission-critical OT assets

The two biggest challenges around securing remote access to mission-critical OT assets are different depending on whether it’s a user or machine that needs to connect to the OT asset. In terms of user access, the fundamental challenge is that the cyber security team doesn’t know what the assets are, and who the users are. That’s where the knowledge of the OT engineers – coupled with an inventory of the assets – comes into play. The security team can leverage the inventory, experience, and knowledge of the OT engineers to operate as the “first line of defense” to stand up the organizational defenses. With respect to machine-to-machine access, organizations typically don’t have an understanding of what “known good” traffic should look like between these assets. Without this knowledge, it’s impossible to spot anomalies from the baseline. That’s where a good cyber-physical system protection platform comes into play, providing the ability to understand the typical communication patterns that can eventually be operationalized in network segmentation rules to ensure effective security.
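A minimal Python sketch of the "known good" baseline idea; the flow tuples and asset names are illustrative simplifications of what a cyber-physical protection platform would learn automatically:

    # Sketch: learn a baseline of known-good machine-to-machine flows, then flag
    # deviations that may warrant investigation or a new segmentation rule.
    def learn_baseline(observed_flows):
        """observed_flows: (source, destination, port) tuples seen during normal operation."""
        return set(observed_flows)

    def find_anomalies(baseline, new_flows):
        return [flow for flow in new_flows if flow not in baseline]

    baseline = learn_baseline([
        ("plc-01", "historian", 44818),     # EtherNet/IP
        ("hmi-02", "plc-01", 502),          # Modbus/TCP
    ])

    print(find_anomalies(baseline, [
        ("hmi-02", "plc-01", 502),          # expected
        ("laptop-99", "plc-01", 502),       # unexpected source talking to a PLC: investigate
    ]))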


CrowdStrike debacle underscores importance of having a plan

To CrowdStrike’s credit, as well as its many partners and the CISO/InfoSec community at large, a lot of oil was burned in the initial days after the faulty update was transmitted as the community collectively jumped in and lent a hand to mitigate the situation. ... “Moving forward, this outage demonstrates that continuous preparation to fortify defenses is vital, especially before outages occur,” Christine Gadsby, CISO at Blackberry, opined. She continued, “Already understanding what areas are most vulnerable within a system prevents a panicked reaction when something looks amiss and makes it more difficult for hackers to wreak havoc. In a crisis, defense is the best offense; the value of confidence that comes with preparation cannot be underestimated.” ... CISOs should also review what needs to be changed, included, or deleted from their emergency response and business continuity playbooks. ... Now is the time for each CISO to do a bit of introspection on their team’s ability to address a similar scenario, and plan, exercise, and be prepared for the unexpected. Which could happen today, tomorrow, or hopefully never.


How Searchable Encryption Changes the Data Security Game

Organizations know they must encrypt their most valuable, sensitive data to prevent data theft and breaches. They also understand that organizational data exists to be used. To be searched, viewed, and modified to keep businesses running. Unfortunately, our Network and Data Security Engineers were taught for decades that you just can't search or edit data while in an encrypted state. ... So why, now, is Searchable Encryption suddenly becoming a gold standard in critical private, sensitive, and controlled data security? According to Gartner, "The need to protect data confidentiality and maintain data utility is a top concern for data analytics and privacy teams working with large amounts of data. The ability to encrypt data, and still process it securely is considered the holy grail of data protection." Previously, the possibility of data-in-use encryption revolved around the promise of Homomorphic Encryption (HE), which has notoriously slow performance, is really expensive, and requires an obscene amount of processing power. However, with the use of Searchable Symmetric Encryption technology, we can process "data in use" while it remains encrypted and maintain near real-time, millisecond query performance.
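A deliberately simplified Python sketch of the searchable symmetric encryption idea (for illustration only; this toy scheme leaks query patterns and is not a production design): the server holds only ciphertext and keyed keyword tokens, yet can still answer equality searches quickly.

    # Toy searchable symmetric encryption sketch (illustration only): records are
    # encrypted client-side, and each keyword becomes a keyed HMAC token so the
    # server can match searches without ever seeing plaintext keywords or data.
    import hashlib
    import hmac
    import os
    from cryptography.fernet import Fernet   # third-party: pip install cryptography

    enc_key = Fernet.generate_key()
    mac_key = os.urandom(32)
    fernet = Fernet(enc_key)

    def keyword_token(word):
        return hmac.new(mac_key, word.lower().encode(), hashlib.sha256).hexdigest()

    index = {}   # server-side: token -> encrypted records; keys never leave the client

    def store(record, keywords):
        blob = fernet.encrypt(record.encode())
        for word in keywords:
            index.setdefault(keyword_token(word), []).append(blob)

    def search(word):
        blobs = index.get(keyword_token(word), [])
        return [fernet.decrypt(b).decode() for b in blobs]   # decrypted client-side

    store("Patient 1042: diagnosis A", ["patient", "diagnosis"])
    print(search("diagnosis"))   # ['Patient 1042: diagnosis A']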


How Cloud-Based Solutions Help Farmers Improve Every Season

At the start of each growing season, farmers can use previous years’ data to strategically plan where and when to plant seeds, identifying the areas of the field where plants often grow strongly or are typically not as prosperous. From there, planters equipped with robotics, sensors, and camera vision, augmented with field boundaries, guidance lines, and other data provided from the cloud, can precisely place hundreds of seeds per second at an optimal depth and with optimal spacing, avoiding losses from seeds being planted too shallow, deep, or close to another plant. ... Advanced machines gather a wide range of data to support the next step of nurturing plant growth. That data is critical, because while plants are growing, so are weeds. And weeds need to be treated in a timely manner to give crops the best possible conditions to grow. With access to the prior year’s data, farmers can anticipate where weeds are likely to grow and target them directly. Today’s sprayers use computer vision and machine learning to detect where weeds are located as the sprayer moves throughout a field, applying herbicide only where it is needed. This not only reduces costs but is also more sustainable.


Thinking Like an Architect

The world we're in is not simple. The applications we build today are complex because they are based on distributed systems, event-driven architectures, asynchronous processing, or scale-out and auto-scaling capabilities. While these are impressive capabilities, they add complexity. Models are an architect’s best tool to tackle complexity. Models are powerful because they shape how people think. Dave Farley illustrated this with an example: long ago, people believed the Earth was at the center of the universe and this belief made the planets' movements seem erratic and complicated. The real problem wasn't the planets' movements but using an incorrect model. When you place the sun at the center of the solar system, everything makes sense. Architects explaining things to others who operate differently may believe that others don't understand when they simply use a different mental model. ... Architects can make everyone else a bit smarter by seeing multiple dimensions. By expanding the problem and solution space, architects enable others to approach problems more intelligently. Often, disagreements arise when two parties view a problem from different angles, akin to debating between a square and a triangle without progress.


CrowdStrike Outage Could Cost Cyber Insurers $1.5 Billion

Most claims will center on losses due to "business interruption, which is a primary contributor to losses from cyber incidents," it said. "Because these losses were not caused by a cyberattack, claims will be made under 'systems failure' coverage, which is becoming standard coverage within cyber insurance policies." But, not all systems-failure coverage will apply to this incident, it said, since some policies exclude nonmalicious events or have to reach a certain threshold of losses before being triggered. The outage resembled a supply chain attack, since it took out multiple users of the same technology all at once - including airlines, doctors' practices, hospitals, banks, stock exchanges and more. Cyber insurance experts said the timing of the outage will also help mitigate the quantity of claims insurers are likely to see. At the moment CrowdStrike sent its update gone wrong, "more Asia-Pacific systems were online than European and U.S. systems, but Europe and the U.S. have a greater share of cyber insurance coverage than does the Asia-Pacific region," Moody's Reports said. The outage, dubbed "CrowdOut" by CyberCube, led to 8.5 million Windows hosts crashing to a Windows "blue screen of death" and then getting stuck in a constant loop of rebooting and crashing.


Open-source AI narrows gap with proprietary leaders, new benchmark reveals

As the AI arms race intensifies, with new models being released almost weekly, Galileo’s index offers a snapshot of an industry in flux. The company plans to update the benchmark quarterly, providing ongoing insight into the shifting balance between open-source and proprietary AI technologies. Looking ahead, Chatterji anticipates further developments in the field. “We’re starting to see large models that are like operating systems for this very powerful reasoning,” he said. “And it’s going to become more and more generalizable over the course of the next maybe one to two years, as well as see the context lengths that they can support, especially on the open source side, will start increasing a lot more. Cost is going to go down quite a lot, just the laws of physics are going to kick in.” He also predicts a rise in multimodal models and agent-based systems, which will require new evaluation frameworks and likely spur another round of innovation in the AI industry. As businesses grapple with the rapid pace of AI advancement, tools like Galileo’s Hallucination Index will likely play an increasingly crucial role in informing decision-making and strategy. 



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - July 29, 2024

Addressing the conundrum of imposter syndrome and LLMs

LLMs, trained on extensive datasets, excel at delivering precise and accurate information across a broad spectrum of topics. The advent of LLMs has undoubtedly been a significant advancement, offering a superior alternative to traditional web browsing and the often tedious process of sifting through multiple sites with incomplete information. This innovation significantly reduces the time required to resolve queries, find answers and move on to subsequent tasks. Furthermore, LLMs serve as excellent sources of inspiration for new, creative projects. Their ability to provide detailed, well-rounded responses makes them invaluable for a variety of tasks, from writing resumes and planning trips to summarizing books and creating digital content. This capability has notably decreased the time needed to iterate on ideas and produce polished outputs. However, this convenience is not without its potential risks. The remarkable capabilities of LLMs can lead to over-reliance, in which we depend on them for even the smallest tasks, such as debugging or writing code, without fully processing the information ourselves.


Enhancing threat detection for GenAI workloads with cloud attack emulation

Detecting threats in GenAI cloud workloads should be a significant concern for most organizations. Although this topic is not heavily discussed, it is a ticking time bomb that might explode only when attacks emerge or if compliance regulations enforce threat detection requirements for GenAI workloads. ... Automatic inventory systems are required to track organizations’ GenAI workloads. This is a critical requirement for threat detection, the basis for security visibility. However, this might be challenging in organizations where security teams are unaware of GenAI adoption. Similarly, only some technical tools can discover and maintain an inventory of GenAI cloud workloads. ... Most cloud threats are not actual vulnerabilities but abuses of existing features, making the detection of malicious behavior challenging. This is also a challenge for rule-based systems since they are not always able to identify intelligently when API calls or log events indicate malicious events. Therefore, event correlation is leveraged to formulate possible events indicating attacks. GenAI has several abuse cases, e.g., prompt injections and training data poisoning. 
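As a rough sketch of the event-correlation idea in Python (the action names, fields and thresholds are illustrative, not a specific cloud provider's audit log schema): individual API calls can look benign, but combining them per principal over a time window surfaces likely abuse of GenAI workloads:

    # Sketch: correlate cloud audit events per principal to flag suspicious GenAI
    # activity that single-event rules would miss. Events are assumed to be already
    # filtered to one time window; action names and thresholds are illustrative.
    from collections import defaultdict

    SUSPICIOUS_COMBO = {"UpdateTrainingDataset", "PutBucketPolicy", "InvokeModel"}

    def correlate(events, invoke_threshold=500):
        by_principal = defaultdict(list)
        for event in events:                      # event: {"principal": ..., "action": ...}
            by_principal[event["principal"]].append(event["action"])

        findings = []
        for principal, actions in by_principal.items():
            if SUSPICIOUS_COMBO <= set(actions):  # training-data change + policy change + use
                findings.append(f"{principal}: possible training data poisoning pattern")
            invokes = actions.count("InvokeModel")
            if invokes > invoke_threshold:
                findings.append(f"{principal}: {invokes} model invocations in window")
        return findings

    print(correlate([
        {"principal": "svc-genai", "action": "UpdateTrainingDataset"},
        {"principal": "svc-genai", "action": "PutBucketPolicy"},
        {"principal": "svc-genai", "action": "InvokeModel"},
    ]))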


Thriving in the AI Era: A 7-Step Playbook For CEOs

Integrating AI into the workplace requires a fundamental shift in how businesses approach employee education and skill development. Leaders must now prioritize lifelong learning and reskilling initiatives to ensure their workforce remains competitive in an AI-driven market. This involves not only technical training but also fostering a culture of continuous learning. By investing in upskilling programs, businesses can equip employees with the proper knowledge and capabilities to work alongside AI technologies. ... The potential risks associated with AI, such as biases, data breaches and misinformation, underscore the urgent need for ethical AI practices. Business leaders must establish robust governance frameworks to ensure that AI technologies are developed and deployed responsibly. This includes implementing standards for fairness, accountability, and transparency in AI systems. ... Maximizing human potential requires creating work environments that facilitate “flow states,” where individuals are fully immersed and engaged in their tasks. Psychologist Mihaly Csikszentmihalyi’s concept of flow theory highlights the importance of focused, distraction-free work periods for enhancing performance.


Benefits and Risks of Deploying LLMs as Part of Security Processes

Advanced LLMs hold tremendous promise to reduce the workload of cybersecurity teams and to improve their capabilities. AI-powered coding tools have widely penetrated software development. Github research found that 92% of developers are using or have used AI tools for code suggestion and completion. Most of these “copilot” tools have some security capabilities. Programmatic disciplines with relatively binary outcomes such as coding (code will either pass or fail unit tests) are well suited for LLMs. ... As a new technology with a short track record, LLMs have serious risks. Worse, understanding the full extent of those risks is challenging because LLM outputs are not 100% predictable or programmatic. ... As AI systems become more capable, their information security deployments are expanding rapidly. To be clear, many cybersecurity companies have long used pattern matching and machine learning for dynamic filtering. What is new in the generative AI era are interactive LLMs that provide a layer of intelligence atop existing workflows and pools of data, ideally improving the efficiency and enhancing capabilities of cybersecurity teams. 


NIST releases new tool to check AI models’ security

The guidelines outline voluntary practices developers can adopt while designing and building their model to protect it against being misused to cause deliberate harm to individuals, public safety, and national security. The draft offers seven key approaches for mitigating the risks that models will be misused, along with recommendations on how to implement them and how to be transparent about their implementation. “Together, these practices can help prevent models from enabling harm through activities like developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and nonconsensual intimate imagery,” the NIST said, adding that it was accepting comments on the draft till September 9. ... While the SSDF is broadly concerned with software coding practices, the companion resource expands the SSDF partly to address the issue of a model being compromised with malicious training data that adversely affects the AI system’s performance, it added. As part of the NIST’s plan to ensure AI safety, it has further proposed a separate plan for US stakeholders to work with others around the globe on developing AI standards.


Data Privacy Compliance Is an Opportunity, Not a Burden

Often, businesses face challenges in ensuring that the consent categories set by their consent management platforms (CMPs) are accurately reflected in their data collection processes. This misalignment can result in user event data inappropriately entering downstream tools. With advanced consent enforcement, customers can now effortlessly synchronize their consent categories with their data collection and routing strategies, eliminating the risk of sending user event data where it shouldn’t be. This establishes a robust connection between the CMP and the data collection engine, ensuring that they consistently align and preventing any unintended data leaks or misconfigurations. Moreover, leaders should consider minimizing the data they collect by ensuring it genuinely advances re-targeting efforts. ... Customers are more interested in protecting their data – and more pessimistic about data privacy – than ever. Organizations can capitalize on this sentiment by becoming robust data stewards. Embracing data privacy as an opportunity rather than a burden can lead to improved outcomes, stronger customer relationships, and a competitive advantage in the market. 
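A small Python sketch of what consent enforcement between a CMP and the event pipeline can look like; the category names and destination mapping are illustrative assumptions, not a particular vendor's configuration:

    # Sketch: enforce CMP consent categories at the point of event routing so user
    # event data never reaches downstream tools the user has not consented to.
    DESTINATION_CATEGORY = {
        "product_analytics": "analytics",
        "ad_network": "advertising",
        "crm": "functional",
    }

    def route_event(event, user_consent):
        """Return only the destinations permitted by this user's consent choices."""
        allowed = []
        for destination, category in DESTINATION_CATEGORY.items():
            if user_consent.get(category, False):
                allowed.append(destination)
            # otherwise: drop (or anonymize) the event for this destination
        return allowed

    consent = {"functional": True, "analytics": True, "advertising": False}
    print(route_event({"type": "page_view", "user": "u42"}, consent))
    # ['product_analytics', 'crm'] -- the ad network never receives the event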


The impact of AI on mitigating risks in hiring processes: Combating employee fraud

There are different ways through which AI is transforming the entire hiring process and eliminating fraud. But to begin with, we must comprehend the many forms in which candidate fraud manifests. It may take place in multiple ways, such as plain lying on resumes, falsifying credentials, or even identity theft. These may consist of intentional misrepresentations or omissions, such as when an applicant doesn’t disclose his/her history of being involved in a crime. Because of this, companies may suffer significant financial losses, sharp declines in production, or even legal problems. In this case, artificial intelligence can help. ... AI is also capable of probing applicant behaviour throughout the recruiting process. Through the utilisation of facial recognition technology, machine learning algorithms can evaluate interview responses and communication styles. These systems can identify subtle facial expressions to detect indicators of deceit or uneasiness. Additionally, voice analysis can be used to spot odd shifts in speech patterns and tonality, providing important details about a candidate’s authenticity.


Balancing Technology with Personal Touch: The Evolution of Digital Lending

The best way to get someone on your side is to invite them into the battle. We brought in some of our retail partners to provide feedback on how the application looks and feels from their perspective. We also involved loan officers who are part of the application intake experience. They were able to provide quick, immediate feedback on the spot and we were able to make changes based on their input. By involving employees in the process, they felt like their voice was heard and they had a seat at the table. ... This approach to employee engagement in digital transformation aligns with broader trends in change management and organizational psychology. Companies across industries are recognizing that successful digital transformations require not just technological upgrades, but also cultural shifts and employee buy-in. ... As financial institutions continue to navigate the digital transformation of lending processes, the key to success lies in balancing technological innovation with a deep understanding of customer needs and a commitment to employee engagement. By embracing change while maintaining a focus on personalized service, banks like Broadway Bank are well-positioned to thrive in the evolving landscape of digital lending.


The True Cost of a Major Network or Application Failure

When critical communication and collaboration tools falter, the consequences extend far beyond immediate revenue loss. Employees experience downtime, productivity declines, and customers may face disruptions in service, leading to dissatisfaction and potential churn. The negative publicity surrounding major outages can further damage a company's brand reputation, eroding stakeholder trust. ... Common issues like dropped calls, delays in joining meetings, and poor audio/video quality issues affecting only a handful of users may seem minor when viewed individually, but their collective toll can be significant. These issues strain IT resources, create a backlog of tickets, and decrease employee morale and job satisfaction. ... To address the challenges posed by network and application failures, it’s clear organizations must be more proactive in setting up monitoring and incident response strategies. After all, receiving real-time insights into the health and performance of UCaaS and SaaS platforms more generally can enable IT teams to identify and address issues before they escalate. Further, implementing robust incident management protocols and conducting regular performance assessments are crucial to minimizing downtime and maximizing operational efficiency.


Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025

A major challenge for organizations arises in justifying the substantial investment in GenAI for productivity enhancement, which can be difficult to directly translate into financial benefit, according to Gartner. ... “Unfortunately, there is no one size fits all with GenAI, and costs aren’t as predictable as other technologies,” said Sallam. “What you spend, the use cases you invest in and the deployment approaches you take, all determine the costs. Whether you’re a market disruptor and want to infuse AI everywhere, or you have a more conservative focus on productivity gains or extending existing processes, each has different levels of cost, risk, variability and strategic impact.” ... By analyzing the business value and the total costs of GenAI business model innovation, organizations can establish the direct ROI and future value impact, according to Gartner. This serves as a crucial tool for making informed investment decisions about GenAI business model innovation. “If the business outcomes meet or exceed expectations, it presents an opportunity to expand investments by scaling GenAI innovation and usage across a broader user base, or implementing it in additional business divisions,” said Sallam.



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree

Daily Tech Digest - July 28, 2024

India's tech revolution fuels the rise of the managed services industry

Companies, eager to leverage cloud computing, AI, and improved solutions, are seeking reliable partners to manage the complexity. With a massive talent pool of STEM graduates, dynamic IT infrastructure, and supportive policies, the world is looking towards India to serve as a primary player in this space. India's arduous journey to becoming a global tech giant is the result of decades of investment of time, money, and energy into meticulously planning and taking continuous strides. Companies from around the world are shifting significant parts of their IT and business spend to India to drive cost optimisation. ... The pandemic acted as a catalyst for a revolution in managed IT services. Agility, resilience, and enhanced cybersecurity became critical overnight and the rise in remote work led to a surge in demand for managed services. Cybersecurity concerns also skyrocketed both during and after the pandemic and innovation in data handling was imperative. Consequently, many companies are pivoting towards managed service providers to strengthen their computing power, data analysis, and cybersecurity measures.


5 Innovative Cybersecurity Measures App Developers Should Incorporate in the Digital Transformation Race

The impending era of quantum computing will give the expected boost to digital transformation; however, this leap beyond classical computing poses a significant challenge to traditional encryption methods, because hackers can leverage its exceptional computing power to launch unprecedented brute-force attacks that crack passwords and encryption in seconds or minutes. App developers must integrate post-quantum cryptographic features to withstand the computational power of quantum computers. ... By incorporating ZTNA and multifactor authentication (MFA), app developers can proactively prevent data breaches by thoroughly verifying the trustworthiness of any user or device trying to access the organization's networks. The multifactor authentication feature adds a layer of security to VPN access by requiring multiple verification forms; users’ verification methods can include passwords, unique OTP codes sent to mobile devices, or biometric authentication, such as fingerprint, eye scan, voice recognition, hand geometry, or facial recognition, before granting network access.


How to Use Self-Healing Code to Reduce Technical Debt

The idea of self-healing code with LLMs is exciting, but balancing automation and human oversight is still crucial. Manual reviews are necessary to ensure AI solutions are accurate and meet project goals, with self-healing code drastically reducing manual efforts. Good data housekeeping is vital, and so is ensuring that teams are familiar with best practices to ensure optimal data management for feeding AI technology, including LLMs and other algorithms. This is particularly important for cross-department data sharing, with best practices including conducting assessments, consolidations, and data governance and integration plans to improve projects. None of this could take place without enabling continuous learning across your staff. To encourage teams to use these opportunities, leaders should carve out dedicated time for training workshops that offer direct access to the latest tools. These training sessions could be oriented around certifications like those from Amazon Web Services (AWS), which can significantly incentivize employees to enhance their skills. By doing this, a more efficient and innovative software development environment can be achieved.


Chaos Management in Software: A Guide on How to Conduct it

Too much development occurring too soon is one symptom that chaos may be present in an organization. Growth is usually beneficial, but not when it causes chaos and confusion. Companies also exhibit indications of disorder when they overstretch their operational capacity or resources, such as money or people, creating an unstable atmosphere for both employees and consumers. ... In the work environment, we can see how chaos can affect our output both negatively and positively. It is essential to be in a healthy work environment that offers employees the opportunity to succeed and be rewarded for their achievements. The problem with chaos is that it can cause an unhealthy work environment, negatively affecting the worker’s productivity, quality of work, and physical health. Chaos in the workplace also impacts team building because when people are in a chaotic space, they cannot focus on anything other than how they feel at that moment. We have all been there – that one time when we did not get enough sleep, or the project was due tomorrow morning, or we needed to wake up early to get that presentation done before the meeting started.


CIOs must reassess cloud concentration risk post-CrowdStrike

Cloud concentration risk arises when enterprises rely worryingly on a single cloud service provider (CSP) for all their critical business needs. In effect, reliance has shifted from their own data centers to storing all data and running all applications on a single cloud infrastructure. Cloud concentration risk is then fully realized when any one incident, like the CrowdStrike outage, can disrupt your entire operation. With enterprises increasingly dependent on the same applications and cloud providers, this can be devastating at scale, as we’ve seen with CrowdStrike. Such a scenario extends to security breaches and other events that can have more systemic impact on countries and industries. ... To avoid the dangers of cloud concentration risk, a multi-cloud strategy, in which business workloads are spread across multiple cloud providers, is vital. With a multi-cloud strategy in place, when one provider has an issue, your operations in the other clouds can keep things running. The alternative is to adopt a hybrid cloud approach, combining private and public cloud. This gives you more control over proprietary and sensitive data whilst still having all the benefits of public cloud scalability.


With ‘Digital Twins,’ The Doctor Will See You Now

Doctors who use the system can not only measure the usual stuff, like pulse and blood pressure, but also spy on the blood’s behavior inside the vessel. This lets them observe swirls in the bloodstream called vortices and the stresses felt by vessel walls — both of which are linked to heart disease. ... We drew a lot from the way they were already optimizing graphics for these computers: The 3D mesh file that we create of the arteries is really similar to what they make for animated characters. The way you move a character’s arm and deform that mesh is the same way you would put in a virtual stent. And the predictions are not just a single number that you want to get back. There’s a quantity called “wall shear stress,” which is just a frictional force on the wall. We’ve shown that when doctors can visualize that wall shear stress at different parts of the artery, they may actually change the length of the stent that they choose. It really informs their decisions. We’ve also shown that, in borderline cases, vorticity is associated with long-term adverse effects. So doctors can see where there’s high vorticity. It could help doctors decide what type of intervention is needed, like a stent or a drug.


What Are the Five Pillars of Data Resilience?

The first is the most basic one: Do you have data backed up in the right way? That seems very straightforward, but you’d be shocked by how many companies don’t have the right backup strategy in place. And that’s vital because our research tells us that 93% of ransomware attackers go for the backups first. ... So, the second pillar is, can you recover quickly from a breach? What’s your recovery strategy, and can you get to your recovery time objective and recovery point objective? Third is data freedom, which is not often talked about. There are many instances where you’ll just need to change your tech stack. You may see a better tech solution, or companies may just change their posture. No matter what choice you make, you need your data to travel with you with minimal fuss. Security is fourth. Do you have the right malware protection? Are you able to detect changing patterns, even of your own employees to mitigate insider threats? And there’s obviously table stakes, like multifactor authentication, end-to-end security, etc. And then the last pillar we look at is data intelligence.


Navigating the Future with Cloud-Ready, Customer-Centric Innovations

One of CFOS’s most transformative aspects is its cloud-based infrastructure. SCC realised that the traditional on-premises servers were becoming a bottleneck, limiting scalability and flexibility. By moving to the cloud, SCC gained the ability to dynamically scale resources according to demand, reducing upfront costs and minimising maintenance challenges. This shift optimised resource utilisation and provided a more agile platform for future growth and technological advancements. “Transitioning from on-premises servers to a cloud solution significantly enhanced SCC’s operational strategy,” Lee revealed. “Previously, managing and scaling physical servers posed challenges, particularly in cost and availability of relevant skill-sets, solutions and resources.” The cloud integration resolved these challenges by enabling SCC to scale resources as needed. This approach enhanced cost efficiency and allowed the organisation to quickly adapt to changing demands. By transitioning to the cloud, SCC was able to manage resources dynamically, accommodating peak loads and supporting future growth without the limitations of physical infrastructure.


The Ultimate Roadmap to Modernizing Legacy Applications

First, organizations should conduct an assessment of their application portfolios to determine which apps are eligible for modernization, whether that be containerization, cloud migration, refactoring or another route. This can help government IT leaders prioritize which apps to upgrade. It also gives teams a comprehensive picture of the entire application portfolio: performance, health, average age, security gaps, container construction and more. “Having an inventory of all of your applications can help you avoid duplicative investments and paint a clearer picture of how that application fits into your organization’s long-term strategy,” says Greg Peters, founder of strategic application modernization assessment (SAMA) at CDW. ... The next critical step is to map dependencies before beginning the actual modernization. “Even a minor change to the functionality of a core system can have major downstream effects, and failing to account for any dependencies on legacy apps slated for modernization can lead to system outages and business interruptions,” Hitachi Solutions notes.


Fully Homomorphic Encryption (FHE) with silicon photonics – the future of secure computing

FHE requires specialist hardware and considerable amounts of processing power, leading to high energy consumption and increased costs. However, FHE enabled by silicon photonics — using light to transmit data — offers a solution that could make FHE more scalable and efficient. Current electronic hardware systems are reaching their limits, struggling to handle the large volumes of data and meet the demands of FHE. However, silicon photonics can significantly enhance data processing speed and efficiency, reduce energy consumption and lead to large-scale implementation of FHE. This can unlock numerous possibilities for data privacy across various sectors, including healthcare, finance and government, in areas such as AI, data collaboration and blockchain. This could potentially lead to significant progress in medical research and fraud detection, and enable large-scale collaboration across industries and geographies. ... FHE is set to transform the future of secure computing and data security. By enabling computations on encrypted data, FHE offers new levels of protection for sensitive information, addressing critical challenges in privacy, cloud security, regulatory compliance, and data sharing.
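To make "computing on encrypted data" concrete, here is a toy Python illustration using textbook RSA with tiny numbers. It is only multiplicatively homomorphic and completely insecure at this size, so it is not FHE; real FHE schemes support arbitrary computation and are far more demanding, which is exactly why the hardware question matters:

    # Toy homomorphic-property demo using textbook RSA with tiny parameters.
    # This is NOT FHE and NOT secure; it only shows that some encryption schemes
    # allow computing on ciphertexts so the result decrypts to a meaningful value.
    n, e, d = 3233, 17, 2753           # n = 61 * 53, the classic textbook key pair

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    c1, c2 = encrypt(7), encrypt(6)    # only ciphertexts leave the client
    c_product = (c1 * c2) % n          # the server multiplies ciphertexts, nothing more

    print(decrypt(c_product))          # 42 == 7 * 6, computed without decrypting the inputs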



Quote for the day:

“The road to success and the road to failure are almost exactly the same.” -- Colin R. Davis

Daily Tech Digest - July 27, 2024

Google DeepMind takes step closer to cracking top-level maths

Unlike a human mathematician, the systems were either flawless or hopeless. In each of the questions they solved, they scored perfect marks, but for two out of the six questions, they were unable to even begin working towards an answer. Moreover, DeepMind, unlike human competitors, was given no time limit. While students get nine hours to tackle the problems, the DeepMind systems took three days working round the clock to solve one question, despite blitzing another in seconds. ... “What we try to do is to build a bridge between these two spheres,” said Thomas Hubert, the lead on AlphaProof, “so that we can take advantage of the guarantees that come with formal mathematics and the data that is available in informal mathematics.” After it was trained on a vast number of maths problems written in English, AlphaProof used its knowledge to try to generate specific proofs in the formal language. Because those proofs can be verifiably true or not, it is possible to teach the system to improve itself. The approach can solve difficult problems, but isn’t always fast at doing so: while it is far better than simple trial and error, it took three days to find the correct formal model for one of the hardest questions in the challenge.


Modern Leaders: Steps To Guide An Organization Through Rapid Growth

With the culture and compass set, leaders need to hold each other accountable for acting according to established norms. Performative behavior is rampant because we often turn a blind eye to issues. Passive bullying occurs when we don't stand up for someone because it's easier to stay uninvolved. Leaders must be willing to put their necks on the line for each other to build real trust. People should feel free to come and go. Create systems where they feel comfortable, feel accepted and can be seen and heard. Leaders must understand that they can't force these connections but must genuinely care about their employees. Performative leadership will fail, as people value authenticity over money and power today. It is an exchange if you want them to care about you. ... “Don’t think in silos!” C-suites may say. Program managers, change agents, integration managers, transformational offices, OKR champions and DEI leaders don’t see silos. Neither does a chief of staff and many HR leaders. We understand nurturing teams of people and goals from an unbiased perspective is effective for everyone. Unfortunately, these roles often have no authority or support, and they face a lot of adversity.


Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy

One thing that sets AI environments apart from their more traditional IT counterparts is their ability for autonomous agency. Companies do not just want AI applications that can automate the creation of content or analyze data; they want models that can take action. As such, those so-called agentic AI systems do pose even greater potential risks. If an attacker can cause an LLM to do something unexpected, and the AI system has the ability to take action in another application, the results can be dramatic, Harang says. "We've seen, even recently, examples in other systems of how tool use can sometimes lead to unexpected activity from the LLM or unexpected information disclosure," he says, adding: "As we develop increasing capabilities — including tool use — I think it's still going to be an ongoing learning process for the industry." Harang notes that even with the greater risk, it's important to realize that it's a solvable issue. He himself avoids the "sky is falling" hyperbole around the risk of GenAI use, and often taps it to hunt down specific information, such as the grammar of a specific programming function, and to summarize academic papers.


AI tutors could be coming to the classroom – but who taught the tutor, and should you trust them?

If AI systems are trained on biased data or without considering diverse perspectives, there is a high likelihood that decisions made based on these systems will favour one group over others, reinforce stereotypes, and ignore or undervalue different ways of living and thinking. The concern isn’t just about the influence AI can have on us but also how AI consumes and processes data. ... If done well, a walled garden approach might provide a comprehensive, inclusive, culturally sustaining pathway to better learning. However, given the challenges of such an undertaking (never mind the expense), the chances of success in practice are extremely small. Meanwhile, we can’t just wait for AI tutors. AI is a reality in schools, and we need to prepare students for what they face now and in the future. Specific tools are important, but our focus should be on developing AI literacy across the educational sector. This is why we are researching what it means to be AI literate and how this can empower critical evaluation and ethical use, ensuring AI complements rather than replaces human teaching.


The case for multicloud: Lessons from the CrowdStrike outage

Multicloud isn’t a magical cure for all that could go wrong. It’s an architectural option with good and bad aspects. The recent outage or something like it will likely drive many enterprises to multicloud for the wrong reasons. We saw this during the pandemic when many new customers rushed to public cloud providers for the wrong reasons. Today, we’re still fixing the fallout from those impetuous decisions. Enterprises should thoroughly assess their workloads and identify their critical applications before they implement a multicloud strategy. Selecting appropriate cloud providers based on their strengths and services is essential. You must manage multiclouds as a collection of integrated systems. Today, many enterprises view multicloud as a collection of silos. The silo approach will fail, either right away or eventually. Treating multicloud as a collection of systems will require more attention to detail, which translates into more upfront time and money to get the plan right the first time. It’s still the best route because doing it the second time is usually twice as expensive.


The Next Phase of Automation: Learning From the Past and Looking to the Future

There’s a tendency for workers to catastrophize AI tools in fear that it will eliminate their jobs. But this is the wrong way to look at it. AI won’t replace you—it’s a tool that should be leveraged for its capacity to increase your value in the workspace. Learn to harness the capabilities of AI automation to remain competitive as the market evolves, otherwise you risk becoming obsolete. ... AI will get better over time, but you need to be realistic about its current capabilities. Simple task automation doesn’t require complicated backend adapters. Find ways for this to aid you in your daily tasks to automate simple, repetitive tasks. Stay on top of changes as this evolves over time to automate more complex processes. ... Your automation strategy should avoid focusing solely on AI tools. There are plenty of automated tools that don’t use AI to perform very useful functions. The tools you source today will set you up for the future, so it’s important to find a full suite of automation tools that can handle the majority of your automation needs. Utilizing a singular vendor ensures these tools work together seamlessly and avoids coverage gaps. 


Combating Shadow AI: Implementing Controls for Government Use of AI

Governance must start from a policy or mission perspective rather than a technology perspective. Understanding the role of AI in government programs from both a benefit and risk perspective takes intentionality by leadership to appoint focused teams that can evaluate the potential intersections of AI platforms and the mission of the agency. The increase in public engagement through technology creates a rich, accessible set of data that AI platforms can use to train their models. Organizations may choose a more conservative approach by blocking all AI crawlers until the impact of allowing those interactions is understood. For those entities that see benefit in legitimate crawling of their public properties, the ability to allow controlled access by verified AI crawlers while protecting against bad actors is critical in today’s environment. From within the organization, establishing which roles and tasks require access to AI platforms is a critical early step in getting ahead of increased regulations.
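
As an illustration of the crawler controls described above, here is a minimal sketch of verifying that a request claiming to be a known AI crawler actually comes from its operator, using a reverse-then-forward DNS check. The user-agent tokens and domain suffixes in the allow-list are assumptions for illustration, not an authoritative registry, and real deployments would typically combine this with published IP ranges.

```python
import socket

# Hypothetical allow-list: claimed user-agent token -> expected reverse-DNS suffixes.
VERIFIED_CRAWLERS = {
    "GPTBot": (".openai.com",),
    "Googlebot": (".googlebot.com", ".google.com"),
}

def is_verified_crawler(user_agent: str, remote_ip: str) -> bool:
    """Allow a claimed crawler only if its IP reverse-resolves to an expected
    domain AND that hostname resolves forward to the same IP."""
    for token, suffixes in VERIFIED_CRAWLERS.items():
        if token in user_agent:
            try:
                hostname, _, _ = socket.gethostbyaddr(remote_ip)  # reverse DNS
                forward_ip = socket.gethostbyname(hostname)       # forward confirmation
            except OSError:
                return False
            return hostname.endswith(suffixes) and forward_ip == remote_ip
    return False  # unknown crawlers stay blocked by default

# A request claiming to be GPTBot from an unverifiable address is rejected.
print(is_verified_crawler("Mozilla/5.0 (compatible; GPTBot/1.0)", "203.0.113.7"))
```

The default-deny return at the end reflects the conservative posture the article describes: anything that cannot be verified is treated as an unverified crawler and blocked.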


The AI blueprint: Engineering a new era of compliance in digital finance

The dynamism of the regulatory environment, along with the RBI’s robust oversight, underscores the necessity for constant adaptation within the industry. As regulatory compliance requirements continue to evolve, organisations must maintain high standards to avoid legal risks. The advent of AI in regulation and compliance provides numerous use cases. ... In the ever-evolving legal and compliance landscape, staying ahead of the curve can potentially save businesses hefty penalties and lawsuits. With NLP capabilities, AI-driven solutions are instrumental in monitoring and predicting changes in the regulatory system. ... AI has the power to streamline compliance-based screenings with accuracy and efficiency. Banks and financial institutions receive multiple alerts and notifications, and it becomes a tedious task to filter through them all. AI helps in assessing alerts, identifying patterns, and providing recommendations for further action. It assists in swiftly screening customer profiles for fraudulent patterns and anomalies, which is crucial for the mandatory KYC process.
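
To make the screening idea above concrete, here is a minimal, hedged sketch of flagging anomalous customer profiles with an off-the-shelf isolation forest so that only flagged cases reach a human analyst. The features, thresholds, and data are invented for illustration and are not drawn from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per customer: [avg_txn_amount, txns_per_day, share_cross_border]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[120.0, 3.0, 0.05], scale=[40.0, 1.0, 0.03], size=(500, 3))
suspicious = np.array([[9500.0, 40.0, 0.9]])  # an obviously unusual profile
profiles = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(profiles)
flags = model.predict(profiles)  # -1 = anomaly, 1 = normal

# Route only flagged profiles to manual KYC review instead of every alert.
for idx in np.where(flags == -1)[0]:
    print(f"profile {idx} flagged for manual review: {profiles[idx]}")
```

In practice the model’s output would feed a case-management queue rather than a print statement, but the principle is the same: let the model triage, and let people handle the exceptions.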


Can AI Truly Transform the Developer Experience?

Coding assistants and the other developer-focused AI capabilities I mentioned will likely enhance DevEx rather than transform it. To be clear, developers should be given access to these tools; they want to use them, find them helpful, and, most importantly, expect access to them. However, these tools are unlikely to resolve the existing DevEx issues within your organization. Improving DevEx starts with asking your developers what needs improvement. Once you have this list (which is likely to be extensive), you can identify the best way to solve these challenges, which may include using AI. ... Using AI to remove the developer toil associated with resolving tech debt addresses one of the common obstacles to a good DevEx. It allows developers to commit more time to tasks like releasing new features. We’ve had great feedback on AutoFix internally, and we’re working on making this available to customers later this year. The currently available AI capabilities for developers are pretty impressive. Developers expect access to these tools to assist them with their daily tasks, which is a good enough reason to provide access to them.


The secret to CIO success? Unleash your inner CEO

It is now understood to a far greater degree than in the past that the CIO must align technologies with business goals to achieve maximum outcomes, but that does not mean the CIO has to do it all — or take a CEO-like role. Every C-suite role is evolving thanks to dramatic technological change. Moreover, the C-suite continues to expand, with IT leaders increasingly stepping in to take on these new leadership roles. Still, there is no doubt the CIO role has evolved in prominence, prestige, and power to be an agent of change. “Saying the CIO will replace the CEO is a stretch, but CIOs being viable candidates as successors for the top spot is real, as business and technology strategy converge,” notes Ashley Skyrme, senior managing director at Accenture, who views the change more as one that will transform the qualities looked for when hiring the next generation of CEOs. “What is the new CEO profile in an era of genAI and data-led business models? How does this change CEO succession planning and who you select as your CIO?” she asks. The answers to those questions will determine “what CEOs need to learn and what CIOs need to learn” to succeed in the future, Skyrme says.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein

Daily Tech Digest - July 25, 2024

7 LLM Risks and API Management Strategies to Keep Data Safe

Overloading an LLM with requests can cause degraded service or increased resource costs, two of the worst outcomes for an organization. Yet that is exactly what’s at stake with a model denial of service. This happens when attackers trigger resource-heavy operations on LLMs, which could look like higher-than-normal task generation or repeated long inputs, to name a few examples. Authentication and authorization can be used to prevent unauthorized users from interacting with the LLM. Rate limiting on the number of tokens per user should also be used to stop users from burning through an organization’s credits, incurring high costs, and consuming so much computation that latency suffers. ... Compliance teams’ concern about sensitive information disclosure is perhaps one of the most severe vulnerabilities limiting LLM adoption. This occurs when models can inadvertently return sensitive information, resulting in unauthorized data access, privacy violations and security breaches. One technique that developers can implement is using specially trained LLM services to identify and either remove or obfuscate sensitive data.
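
To illustrate the per-user token rate limiting recommended above, here is a minimal token-bucket sketch. The capacity, refill rate, and function names are assumptions for illustration, not any particular API gateway’s configuration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Per-user budget of LLM tokens, refilled at a fixed rate."""
    capacity: float = 10_000            # maximum tokens a user can burst
    refill_per_sec: float = 50.0        # sustained tokens per second allowed
    tokens: float = 10_000
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self, requested_tokens: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_per_sec)
        self.last_refill = now
        if requested_tokens <= self.tokens:
            self.tokens -= requested_tokens
            return True
        return False  # reject: this request would exceed the user's budget

buckets: dict[str, TokenBucket] = {}

def check_request(user_id: str, prompt_tokens: int) -> bool:
    """Look up (or create) the caller's bucket and charge it for this prompt."""
    return buckets.setdefault(user_id, TokenBucket()).allow(prompt_tokens)

# A user sending repeated, very long prompts is throttled once the bucket drains.
print(check_request("user-42", 8_000))  # True
print(check_request("user-42", 8_000))  # False until the bucket refills
```

Rejected requests can then surface as an HTTP 429 from the API management layer, keeping runaway usage from turning into runaway cost or latency for everyone else.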


Michael Dell performed a ‘hard reset’ of his company so it could survive massive industry shifts and thrive again. Here’s how it’s done

A hard reset asks and answers a small set of critical strategy questions. It starts with revisiting your beliefs. Discuss and debate your updated beliefs with the team and build a plan to actively test the ones you disagree on or are most uncertain about. Next, ask what it will take to build a defensible competitive advantage going forward: Determine if you still have a competitive advantage (you probably don’t—otherwise you wouldn’t be in a hard reset). Glean what elements you can use to strengthen and build an advantage going forward. Over-index on the assets you can strengthen and discuss what you will buy or build. Make sure you anchor this in your beliefs around where the world is going. ... During a hard reset, develop rolling three-month milestones set towards a six-month definition of success. Limit these milestones to ten or fewer focused tasks. Remember you are executing these milestones while continuing the reset process and related discussions, so be realistic about what you can achieve and avoid including mere operational tactics on the milestone list.


Software testing’s chaotic conundrum: Navigating the Three-Body Problem of speed, quality, and cost

Companies that prioritize speed over quality end up having to choose between releasing to market anyway, risking reputational damage and client churn, or pushing back timelines and going over budget trying to retrofit quality (which isn’t really possible, by the way). ... Quality is the cornerstone of successful digital products. Users expect software to function reliably, deliver on its promises and provide a seamless user experience. Comprehensive testing plays a large role in making sure users are not disappointed. Developers need to look beyond basic functional testing and consider aspects like accessibility, payments, localisation, UX and customer journey testing. However, investing heavily in testing infrastructure, employing skilled QA engineers and rigorously testing every feature before release is expensive and slow. ... Quality engineers are limited by budget constraints, which can affect everything from resource allocation to investments in tooling. However, underfunding quality efforts can have disastrous effects on customer satisfaction, revenues and corporate reputation. To deliver competitive products within a reasonable timeframe, quality managers need to use available budgets as efficiently as possible.


Cloud security threats CISOs need to know about

An effective cloud security incident response plan details preparation, detection and analysis, containment, eradication, recovery and post-incident activities. Preparation involves establishing an incident response team with defined roles, documented policies, necessary tools and a communication plan for stakeholders. Detection and analysis require continuous monitoring, logging, threat intelligence, incident classification and forensic analysis capabilities. Containment strategies and eradication processes are essential to prevent the spread of incidents and eliminate threats, followed by detailed recovery plans to restore normal operations. Post-incident activities include documenting actions, conducting root cause analysis, reviewing lessons learned, and updating policies and procedures. ... Organizations should start by doing a comprehensive risk assessment to identify critical assets and evaluate potential risks, such as natural disasters and cyberattacks. Following the assessment, develop and document DR and BC procedures. Annually review and update the procedures to reflect changes in the IT environment and emerging threats.


Artificial Intelligence Versus the Data Engineer

So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. A number, a chart, a result that we can stand behind and defend—but like all great science, getting there also needs a bit of art. That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. ... What’s exciting for us beleaguered data engineers is that AI is showing great ability to be a very helpful tool for these hard-to-master skills that will ultimately make us better and more productive at our jobs. We have all, no doubt, seen the great advancements in AI’s ability to take plain text queries and turn them into increasingly complex SQL, thus lightening the load of remembering all the advanced syntax for whichever data platform is in vogue.
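
The plain-text-to-SQL workflow mentioned above can be sketched roughly as follows, using the OpenAI Python SDK. The model name, schema, and prompt are illustrative assumptions rather than a recommendation, and any generated SQL should be reviewed by the engineer before it touches production data.

```python
from openai import OpenAI  # assumes the SDK is installed and an API key is set in the environment

client = OpenAI()

SCHEMA = """
orders(order_id INT, customer_id INT, order_date DATE, total NUMERIC)
customers(customer_id INT, region TEXT, signup_date DATE)
"""

def text_to_sql(question: str) -> str:
    """Ask the model for a single SQL query answering the question against SCHEMA."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"Write one ANSI SQL query for this schema:\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(text_to_sql("Monthly revenue by region for 2023, ordered by revenue descending"))
```

The craft the author describes does not disappear here: the engineer still has to know whether the schema, the joins, and the history behind the data make the generated query trustworthy.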


CrowdStrike crash showed us how invasive cyber security software is. Is there a better way?

In the wake of this incident it’s worth considering whether the tradeoffs made by current EDR technology are the right ones. Abandoning EDR would be a gift to cyber criminals. But cyber security technology can – and should – be done much better. From a technical standpoint, Microsoft and CrowdStrike should work together to ensure tools like Falcon operate at arm’s length from the core of Microsoft Windows. That would greatly reduce the risk posed by future faulty updates. Some mechanisms already exist that may allow this. Competing technology to CrowdStrike’s Falcon already works this way. To protect user privacy, EDR solutions should adopt privacy-preserving methods for data collection and analysis. Apple has shown how data can be collected at scale from iPhones without invading user privacy. To apply such methods to EDR, though, we’ll likely need new research. More fundamentally, this incident raises questions about why society continues to rely on computer software that is so demonstrably unreliable. 
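
One family of privacy-preserving collection methods alluded to above (the kind Apple popularised) is local differential privacy. The following randomized-response sketch shows the core idea: each endpoint adds noise to its own report, yet the fleet-wide rate can still be estimated. The probabilities and the telemetry signal are chosen purely for illustration.

```python
import random

def randomized_response(saw_threat: bool, p_truth: float = 0.75) -> bool:
    """Report the truth with probability p_truth, otherwise a coin flip,
    so no single report reveals that device's true state."""
    if random.random() < p_truth:
        return saw_threat
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool], p_truth: float = 0.75) -> float:
    """Invert the noise in aggregate: E[reported] = p_truth * true + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# 10,000 endpoints, 2% of which actually observed the suspicious behaviour.
true_states = [random.random() < 0.02 for _ in range(10_000)]
reports = [randomized_response(s) for s in true_states]
print(round(estimate_true_rate(reports), 3))  # close to 0.02, without trusting any single report
```

Applying this kind of scheme to the richer signals EDR products collect is exactly the open research problem the article points to; the sketch only shows the statistical trick that makes it possible for simple counts.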


6 Pillars Of Entrepreneurial Mastery: Elevating Your Business Through Lifelong Learning

Entrepreneurs with a growth mindset understand that abilities and intelligence can be developed through dedication and hard work. This perspective fosters resilience, helping to navigate setbacks and failures with a constructive attitude. By viewing challenges as opportunities for growth, you can become more adaptable and willing to take calculated risks. Regular self-reflection, seeking feedback and staying open to new ideas are essential practices for cultivating this mindset. ... As an entrepreneur, continuously educate yourself on tax regulations, funding options and financial management best practices. Engaging with online courses, workshops and financial mentors can provide valuable insights and help stay abreast of emerging trends. ... In today's digital age, technology is a major driver of business innovation and efficiency. Entrepreneurs must stay informed about the latest technological advancements relevant to their industry. This encompasses the implementation and utilization of new software, tools, and platforms to streamline operations, enhance productivity, and improve customer experiences.


Software Architecture in an AI World

Programming isn’t software architecture, a discipline that often doesn’t require writing a single line of code. Architecture deals with the human and organizational side of software development: talking to people about the problems they want solved and designing a solution to those problems. That doesn’t sound so hard, until you get into the details—which are often unspoken. Who uses the software and why? How does the proposed software integrate with the customer’s other applications? How does the software integrate with the organization’s business plans? How does it address the markets that the organization serves? Will it run on the customer’s infrastructure, or will it require new infrastructure? On-prem or in the cloud? How often will the new software need to be modified or extended? ... Every new generation of tooling lets us do more than we could before. If AI really delivers the ability to complete projects faster—and that’s still a big if—the one thing that doesn’t mean is that the amount of work will decrease. We’ll be able to take the time saved and do more with it: spend more time understanding the customers’ requirements, doing more simulations and experiments, and maybe even building more complex architectures.


Edge AI: Small Is the New Large

The technologies driving these advancements include AI-enabled chips, NPUs, embedded operating systems, the software stack and pre-trained models. Collectively, they form an SoC (system on chip). Software, hardware and applications are key to enabling an intelligent device at the edge. The embedded software stack in the chip brings it all together and makes it work. Silicon Valley-based embedUR specializes in creating software stacks for bespoke edge devices, acting as a "software integrator" that collaborates closely with chip manufacturers to build custom solutions. "We have the ability to build managed software, as well as build individual software stacks for small, medium and large devices. You can think of us as a virtual R&D team," Subramaniam said. ... OpenAI released a smaller version of the ChatGPT language model called GPT-4o mini, set to be 60% cheaper than GPT-3.5. But smaller does not mean less powerful in terms of AI processing. Despite their smaller size, small language models (SLMs) possess substantial reasoning and language understanding capabilities. For instance, Phi-2 has 2.7 billion parameters, Phi-3 has 7 billion, and Phi-3 mini has 3.8 billion.
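
A rough, back-of-the-envelope way to see why these parameter counts matter at the edge is to estimate weight-only memory at different precisions. The figures below multiply parameters by bytes per parameter and ignore activations, KV caches, and runtime overhead, so they are lower bounds for illustration only.

```python
# Approximate weight-only memory for small language models at different precisions.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}
MODELS = {"Phi-2 (2.7B)": 2.7e9, "Phi-3 mini (3.8B)": 3.8e9, "Phi-3 (7B)": 7.0e9}

for name, params in MODELS.items():
    sizes = ", ".join(f"{fmt} ≈ {params * b / 1e9:.1f} GB" for fmt, b in BYTES_PER_PARAM.items())
    print(f"{name}: {sizes}")
```

For a 3.8-billion-parameter model this works out to roughly 7.6 GB at fp16 but under 2 GB at int4, which is the difference between needing a discrete accelerator and fitting within an embedded device's memory budget.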


Reflecting on Serverless: Current State, Community Thoughts, and Future Prospects

The great power of serverless is that getting started and becoming productive is much easier. Just think how long it would take a developer who has never seen either Lambda or Kubernetes to deploy a Hello World backend with a public API on both. As you start building more realistic production applications, the complexity increases. You must take care of observability, security, cost optimization, failure handling, etc. With non-serverless, this responsibility usually falls on the operations team. With serverless, it usually falls on developers, where there is considerable confusion. ... Issues like serverless testing, serverless observability, learning to write a proper Lambda handler, dealing with tenant isolation, working with infrastructure-as-code tools (too many AWS options: SAM, CDK, Chalice; which one to choose and why?), and learning all the best practices overwhelm developers and managers alike. AWS has published articles on most topics, but there are many opinions, too many 'hello world' projects that get deprecated within six months, and not enough advanced use cases.
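
The Lambda half of that Hello World comparison can be pictured with a minimal Python handler behind an API Gateway proxy integration, sketched below; the response shape follows the standard proxy contract, while the resource wiring is left to whichever infrastructure-as-code tool (SAM, CDK, or Chalice) you pick.

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration."""
    # Query string parameters may be absent, so fall back to an empty dict.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

As the article notes, the handler itself is the easy part; the real work is everything around it: observability, tenant isolation, testing, and keeping the infrastructure definition maintainable.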



Quote for the day:

"You are the only one who can use your ability. It is an awesome responsibility." -- Zig Ziglar