Daily Tech Digest - December 23, 2023

How LLMs made their way into the modern data stack in 2023

Beyond helping teams generate insights and answers from their data through text inputs, LLMs are also handling traditionally manual data management and the data efforts crucial to building a robust AI product. In May, Intelligent Data Management Cloud (IDMC) provider Informatica debuted Claire GPT, a multi-LLM-based conversational AI tool that allows users to discover, interact with and manage their IDMC data assets with natural language inputs. It handles multiple jobs within the IDMC platform, including data discovery, data pipeline creation and editing, metadata exploration, data quality and relationships exploration, and data quality rule generation. Then, to help teams build AI offerings, California-based Refuel AI provides a purpose-built large language model that helps with data labeling and enrichment tasks. A paper published in October 2023 also shows that LLMs can do a good job at removing noise from datasets, which is also a crucial step in building robust AI. Other areas in data engineering where LLMs can come into play are data integration and orchestration. 


Corporate governance in 2023: a year in review

2023 has seen a continuing trend of more responsibilities for directors. Often, this responsibility comes from regulators; sometimes, it comes from investors or other stakeholders. One thing is certain, though: directors are rapidly losing any remaining wiggle room to be “rubber-stamp” individuals. Modern board roles carry serious accountability; many directors are starting to appreciate that and adhere to new standards. The trouble is sometimes the new standard overstretch the director – so much so that we now have concerns about overboarding, exhaustion, and undue stress. How will that play out if the trend of more responsibility continues? ... The board dismissed the evidently popular CEO Sam Altman in a decision made behind closed doors with utmost secrecy. And as the world’s attention predictably turned their way, they could give no answers. Soon, Altman was rehired after around 70% of the company’s staff threatened to resign and join Microsoft (a significant OpenAI investor). The board subsequently agreed to undergo a major reshuffle for more accountability and transparent decision-making.


Quantum Computing’s Hard, Cold Reality Check

The problem isn’t just one of timescales. In May, Matthias Troyer, a technical fellow at Microsoft who leads the company’s quantum computing efforts, co-authored a paper in Communications of the ACM suggesting that the number of applications where quantum computers could provide a meaningful advantage was more limited than some might have you believe. “We found out over the last 10 years that many things that people have proposed don’t work,” he says. “And then we found some very simple reasons for that.” The main promise of quantum computing is the ability to solve problems far faster than classical computers, but exactly how much faster varies. There are two applications where quantum algorithms appear to provide an exponential speed up, says Troyer. One is factoring large numbers, which could make it possible to break the public key encryption the internet is built on. The other is simulating quantum systems, which could have applications in chemistry and materials science. Quantum algorithms have been proposed for a range of other problems including optimization, drug design, and fluid dynamics. 


Navigating the Data Landscape: The Crucial Role of Data Governance in Today’s Business Environment

Data quality management has become increasingly paramount as the volume of data exponentially raises day by day. Organizations can protect their data with policies and procedures, ensure that they follow all the rules and regulations, hire folks that understand the data you are collecting and what it means to their company but if that data isn’t high quality your organization may get the short end of the stick. Maybe you’re three weeks late for a TikTok trend or you miss out on a whole subset of customers because of the misstep with your collection methods, either way that profit loss and a chance to build on that data point in the future could be a pivotal misstep. Ensuring that your organization has processes to monitor and improve your data quality on a continuous basis will save your organization time and money in the long run. Despite its importance, implementing effective data governance comes with challenges. Organizations often face resistance to change, cultural barriers, and the complexity of managing diverse data sources.


Choosing Between Message Queues and Event Streams

There are numerous distinctions between technologies that allow you to implement event streaming and those that you can use for message queueing. To highlight them, I will compare Apache Kafka and RabbitMQ. I’ve chosen Kafka and RabbitMQ specifically because they are popular, widely used solutions providing rich capabilities that have been extensively battle-tested in production environments. ... Message queueing and event streaming can both be used in scenarios requiring decoupled, asynchronous communication between different parts of a system. For instance, in microservices architectures, both can power low-latency messaging between various components. However, going beyond messaging, event streaming and message queueing have distinct strengths and are best suited to different use cases. ... Message queueing is a good choice for many messaging use cases. It’s also an appealing proposition if you’re early in your event-driven journey; that’s because message queueing technologies are generally easier to deploy and manage than event streaming solutions. 


5G and edge computing: What they are and why you should care

Instead of relying solely on large, high-powered cell towers (as 4G does), 5G will run off both those towers and a ton of small cell sites that can be clustered together. This is how 5G achieves its population density. 5G is also supposed to be more energy efficient. As such, the communications component of IoT devices won't drain as much power, resulting in longer battery life for connected devices. There's also a ton of AI and machine learning in 5G implementations. 5G nodes and interface devices deployed on the edge, away from central hubs. They utilize AI and machine learning to analyze communications performance, and use AI to bandwidth-shape communications, to wring as much performance out of the hardware as possible. You're familiar with the term "cloud computing." We've all used cloud services, services that run on a server someplace rather than on our desktop computers or mobile devices. The cloud, of course, isn't really a cloud. Amazon, Google, Facebook, Microsoft, and others operate massive data centers packed with thousands upon thousands of servers. Soft and fluffy, the cloud is not.


Stolen Booking.com Credentials Fuel Social Engineering Scams

Social engineering expert Sharon Conheady said this type of trickery remains extremely difficult to repel, because of the customer-first nature of hospitality. Many public-facing people in such organizations, such as receptionists, are "trained to help people - that's their job," and of course they're going to bend over backwards to try to meet apparent customers' demands, Conheady said in an interview at this month's Black Hat Europe conference in London. Help desks remain another frequent target. "I had a client lately who asked me to call the help desk and obtain BitLocker keys," she said, referring to a recent penetration test. "Every single one of the help desk agents gave us the BitLocker key." That prompted her to ask: Do these personnel even know what a BitLocker key is, and why they shouldn't share it? The client said they didn't know. While training people in customer-facing roles can help, Conheady said the only truly effective approach would be to put in place strong technical controls to outright prevent and block such attacks.


Significantly Improving Security Posture: A CMMI Case Study

“Phoenix Defense has led the way in adopting CMMI Security best practices for nearly two decades, and now included the Security best practices,” says Kris Puthucode, Certified CMMI High Maturity Lead Appraiser at Software Quality Center LLC. “This adoption has yielded quantifiable benefits, enhancing security posture across Mission, Personnel, Physical, Process, and Cybersecurity domains. Additionally, incorporating Virtual work best practices has standardized virtual meetings and events, boosting efficiency.” Phoenix Defense has been a CMMI Performance Solutions Organization since 2005, first achieving Maturity Level 5 in 2020. ... Before adopting CMMI Security and Managing Security Threats and Vulnerabilities Practice Areas in the model, Phoenix Defense had a closed network with no outward-facing applications and relied on a third-party vendor to monitor threats and spam. They did not fully, quantitively track attacks against the networks or other data flows, and they required a more robust approach to properly ensure network security.


5 common data security pitfalls — and how to avoid them

While regulations like GDPR and SOX set standards for data security, they are merely starting points and should be considered table stakes for protecting data. Compliance should not be mistaken for complete data security, as robust security involves going beyond compliance checks. The fact is that many large data breaches have occurred in organizations that were fully compliant on paper. Moving beyond compliance requires actively identifying and mitigating risks rather than just ticking boxes during audits. ... Data is one of the most valuable assets for any organization. And yet, the question, “Who owns the data?” often leads to ambiguity within organizations. Clear delineation of data ownership and responsibility is crucial for effective data governance. Each team or employee must understand their role in protecting data to create a culture of security. ... Unpatched vulnerabilities are one of the easiest targets for cyber criminals. This means that organizations face significant risks when they can’t address public vulnerabilities quickly. Despite the availability of patches, many enterprises delay deployment for various reasons, which leaves sensitive data vulnerable.


Outmaneuvering AI: Cultivating Skills That Make Algorithms Scratch Their Head

Reasoning, the intellectual ninja of skills, is all about slicing through misinformation, assumptions, and biases to get to the heart of the matter. It’s not just drawing conclusions, but thinking about how we do that. This skill is the brain’s bouncer, keeping cognitive fallacies and hasty generalizations at bay. We humans, bless our hearts, are prone to jumping on the bandwagon or seeing patterns where there are none (like seeing a face on Mars or believing in hot streaks at Vegas). These mental shortcuts, or heuristics, can lead us astray, making reasoning not just useful but essential. AI is trained on our past reasoning reflected in old works. But it can’t reason on its own — at least not yet. Consider a business deciding whether to invest in a new technology. Without proper reasoning, they might follow the hype (everyone else is doing it!) or rely on gut feelings (it just feels right!). But with reasoning, they dissect the decision, weigh the evidence, consider alternatives, and make a choice that’s not just good on paper, but good in reality.



Quote for the day:

"Whether you think you can or you think you can’t, you’re right." -- Henry Ford

Daily Tech Digest - December 22, 2023

Healthcare Organisations Embrace New Technologies to Fortify Cyber Defences

Healthcare organisations have initiated partnership with others to develop security operations centres to monitor their traffic and identify threats. Proactive programs like threat hunting and brand monitoring have also been preferred. ... These initiatives are being taken keeping in mind the requirements from CERT-IN to report cyber incidents within six hours, and new requirements under Digital Personal Data protection Act, 2023, which require the organisation to take measures to identify sources of data, take consent and manage the use and eventual destruction of data as per the guidelines given by the government. “Investments in advanced IAM technologies are becoming paramount, encompassing robust authentication methods, privileged access controls, and continuous monitoring of user activities,” says Pramod Bhaskar, CISO, Cross Identity. These measures align closely with regulatory changes and compliance requirements, as regulations like HIPAA increasingly emphasise the importance of secure user authentication, access governance, and audit trails in safeguarding patient information.


The Window of Exposure: A Critical Component of Your Cybersecurity Strategy

The goal of any responsible security professional is to reduce the window of exposure as much as possible. There are two basic approaches to this: limiting the amount of vulnerability information available to the public and reducing the window of exposure in time by issuing patches quickly. Limiting the amount of vulnerability information available to the public might work in theory, but it is impossible to enforce in practice. There is a continuous stream of research in security vulnerabilities, and most of this research results in public announcements. Hackers write new attack exploits all the time, and the exploits quickly end up in the hands of malicious attackers. While some researchers might choose not to publish a vulnerability they discover, public dissemination of vulnerability information is the norm because it is the best way to improve security. Reducing the window of exposure in time by issuing patches quickly is the other approach. Full-disclosure proponents publish vulnerabilities far and wide to spur vendors to patch faster. 


MLflow vulnerability enables remote machine learning model theft and poisoning

Many developers believe that services bound to localhost — a computer’s internal hostname — cannot be targeted from the internet. However, this is an incorrect assumption according to Joseph Beeton, a senior application security researcher at Contrast Security, who recently held a talk on attacking developer environments through localhost services at the DefCamp security conference. Beeton recently found serious vulnerabilities in the Quarkus Java framework and MLflow that allow remote attackers to exploit features in the development interfaces or APIs exposed by those applications locally. The attacks would only require the computer user to visit an attacker-controlled website in their browser or a legitimate site where the attacker managed to place specifically crafted ads. Drive-by attacks have been around for many years, but they are powerful when combined with a cross-site request forgery (CSRF) vulnerability in an application. In the past hackers used drive-by attacks through malicious ads placed on websites to hijack the DNS settings of users’ home routers.


Chameleon Android Trojan Offers Biometric Bypass

The variant includes several new features that make it even more dangerous to Android users that its previous incarnation, including a new ability to interrupt the biometric operations of the targeted device, the researchers said. By unlocking biometric access (facial recognition or fingerprint scans, for example), attackers can access PINs, passwords, or graphical keys through keylogging functionalities, as well as unlock devices using previously stolen PINs or passwords. "This functionality to effectively bypass biometric security measures is a concerning development in the landscape of mobile malware," according to Threat Fabric's analysis. ... The malware's key new ability to disable biometric security on the device is enabled by issuing the command "interrupt_biometric," which executes the "InterruptBiometric" method. The method uses Android's KeyguardManager API and AccessibilityEvent to assess the device screen and keyguard status, evaluating the state of the latter in terms of various locking mechanisms, such as pattern, PIN, or password.


The Rise of AI-Powered Applications: Large Language Models in Modern Business

AI and LLMs have fundamentally altered how people and organizations interact with technology. While they drive innovation and automation across multiple sectors simultaneously, they also change how professionals make decisions and communicate with customers. They have redefined industry-specific domains while enhancing industrial growth and innovation potential. With further development and research, it is only a matter of time before these AI-driven models can replicate the qualities of human speech and interaction. There is no certainty as to the extent of AI developments and capabilities. While the potential for innovation and development seems endless, AI’s rapid growth in business and industry proves that developers have only reached the tip of the iceberg. As AI functionalities become faster and more proficient, the healthcare, education, and financial service industries will thrive further and deliver trustworthy, reliable care and services for patients, students, and customers worldwide. Because LLMs offer operational support in data and analytics, there will be cost savings as professionals transfer their time and efforts elsewhere. 


NIST Seeks Public Comment on Guidance for Trustworthy AI

This is the first time there has been an "affirmative requirement" for companies developing foundational models that pose a serious risk to national security, economic security, public health or safety to notify the federal government when training their models, and to share the results of red team safety tests, said Lisa Sotto, partner at Hunton Andrews Kurth and chair of the company's global privacy and cybersecurity practice. This will have a "profound" impact on the development of AI models in the United States, she told Information Security Media Group. While NIST does not directly regulate AI, it helps develop frameworks, standards, research and resources that play a significant role in informing the regulation and the technology's responsible use and development. Its artificial intelligence risk management framework released earlier this year seeks to provide a comprehensive framework for managing risks associated with AI technologies. Its recent report on bias in AI algorithms seeks to help organizations develop potential mitigation strategies, and the Trustworthy and Responsible AI Resource Center, launched in March, is a central repository for information about NIST's AI activities.


Why laptops and other edge devices are being AI-enabled

You can run them in the cloud, but as well as the inevitable latency this involves, it’s also increasingly costly both in terms of network bandwidth and cloud compute costs. There’s also the governance issue of sending all that potentially-sensitive and bulky data to and fro. So at the very least, doing a first-cut and filter to reduce and/or sanitise the transmitted data volume, is valuable in all sorts of ways. You could use the GPU or even the CPU to do this filtering, and indeed that’s what some edge devices will be doing today. Alternatively you could simply run the inferencing work on the local CPU or GPU in your laptop or desktop. That works, but it’s slower. Not only can dedicated AI hardware such as an NPU do the job much faster, it will also be much more power-efficient. GPUs and CPUs doing this sort of work tend to run very hot, as evidenced by the big heatsinks and fans on high-end GPUs. That power-efficiency is useful in a desktop machine, but is much more valuable when you’re running an ultraportable on battery, yet you still want AI-enhanced videoconferencing, speedy photo editing, or smoother gaming and AR.


Future of wireless technology: Key predictions for 2024

New IoT technology will help unify connectivity across multiple home devices, transforming home users’ experience with IoT devices. Matter— a new industry standard launched in 2023 provides reliable, secure connectivity across multiple device manufacturers. Given the weight of players involved (e.g., Apple, Amazon, Google, Samsung SmartThings), we expect the adoption of Matter-certified products will be exponential in the next three years, validating Wi-Fi’s central role in the smart connected home and buildings. Pilot projects and trials of TIP Open Wi-Fi will proliferate in developing countries and price-sensitive markets due to its cost-effectiveness and the benefits offered by an open disaggregated model. Well-established wireless local-area network (WLAN) vendors will continue working to make themselves more cost-effective in these markets through massive investment in machine learning and AI and an integrated Wi-Fi + 5G offering to enterprises. Augmented and virtual reality will gain a larger share of our daily lives at home and work


What developers trying out Google Gemini should know about their data

Google told ZDNET that it uses the API inputs and outputs to improve product quality. "Human review is a necessary step of the model improvement process," a spokesperson said. "Through review and annotation, trained reviewers help enable quality improvements of generative machine-learning models like the ones that power Google AI Studio and the Gemini Pro via the Gemini API." To protect developers' privacy, Google said their data is de-identified and disassociated from their API key and Google account, which is needed to log in to Google AI Studio. This protection takes place done before the reviewers can see or annotate the data. Google's Terms of Service (ToS) for its generative AI APIs further states that the data is used to "tune models" and may be retained in connection to the user's tuned models "[for] re-tuning when supported models change". The ToS states: "When you delete a tuned model, the related tuning data is also deleted." The terms also state that users should not submit sensitive, confidential, or personal data to the AI models.


14 in-demand cloud roles companies are hiring for

As cloud computing grows increasingly complex, cloud architects have become a vital role for organizations to navigate the implementation, migration, and maintenance of cloud environments. These IT pros can also help organizations avoid potential risks around cloud security, while ensuring a smooth transition to the cloud across the company. With 65% of IT decision-makers choosing cloud-based services by default when upgrading technology, cloud architects will only become more important for enterprise success. ... DevOps focuses on blending IT operations with the development process to improve IT systems and act as a go-between in maintaining the flow of communication between coding and engineering teams. It’s a role that focuses on the deployment of automated applications, maintenance of IT and cloud infrastructure ... Security architects are responsible for building, designing, and implementing security solutions in the organization to keep IT infrastructure secure. For security architects working in a cloud environment, the focus is on designing and implementing security solutions that protect the business’ cloud-based infrastructure, data, and applications.



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous

Daily Tech Digest - December 21, 2023

The New HR Playbook: Catalyze Innovation With Analytics And AI

Metaverse and blockchain technologies — underpinned by data and AI — also offer a lot of possibilities for improving HR practices. The metaverse, a shared virtual space bridging physical and digital realities, offers avenues for remote workspaces and virtual collaboration. It can enhance recruitment, onboarding, training, and development processes by providing immersive and interactive experiences that engage candidates and employees on a new level. The metaverse could also help companies with decentralized teams cultivate a strong organizational culture by giving employees a shared virtual space for interaction and engagement. Blockchain technology offers transparency and security that can have profound implications for HR processes. HR departments can use blockchain to improve the security of record-keeping, verify employee credentials, and simplify benefits administration. Blockchain can also streamline payroll processes, especially for international employees. Companies can even use blockchain to create decentralized, employee-driven platforms for collaboration and communication.


Why 2024 will be the year of the CISO

As the ESG/ISSA research indicates, many fed-up CISOs will retire, while others will move on to become virtual CISOs (vCISOs) or take field CISO positions with security technology vendors. We'll read numerous stories next year about CISOs up and quitting on the spur of the moment. While the reasons won't be disclosed, you can bet they are among those cited above. Competition for qualified candidates will be fierce. On a side note, I don't believe there is a significant population of next-generation CISO candidates with the right experience to step up. In 2024, we will augment our general discussion of the global cybersecurity skills shortage with a specific addendum about the CISO shortage. CISO pay and compensation will rise precipitously. Aside from a handful of $1 million positions, CISOs aren't paid nearly as much as one might assume. Salary.com calculates a median salary of about $241,000 with 90% of CISOs making $302,000 or less. Given the job requirements (long hours, stress, being on-call, etc.), this isn't very much. With the competition for candidates, firms will greatly increase base pay, perks, and bonuses, leading to hyper CISO salary inflation.


Hot Jobs in AI/Data Science for 2024

“The new and highly specialized role known as the ‘LLM Engineer’ is primarily found within organizations that have reached an advanced stage in their AI journey, having conducted numerous experiments but now facing challenges in the operationalization of their AI models at scale,” says Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab. ... “Some of the most sought-after AI positions today include machine learning engineer, AI engineer, and AI architect,” says Shmuel Fink, Chair Master of Science in data analytics, Touro University Graduate School of Technology. “Nevertheless, several other AI roles are also gaining prominence, such as AI ethicist, AI product manager, AI researcher, computer vision engineer, robotics engineer, and AI safety engineer. Moreover, there are positions that require industry-specific expertise, like a healthcare AI engineer.” But back at the ranch, employees in any job role will become more valuable if they possess AI skills. As they gain those skills, some specialized job roles will evolve while others disappear.


How Blockchain Will Change Organizations

The fact that blockchain is a distributed database means it is very difficult to delete data. Once something has been recorded on the blockchain, it becomes part of the permanent record. This traceability of data is another key advantage of blockchain technology. The data stored on a blockchain is immutable, meaning that it cannot be changed or deleted. This traceability can be useful for tracking the provenance of goods and tracing the origins of data. It also has implications for compliance, as organizations will be able to show exactly what data they have and where it came from. ... Under the traditional centralized model, organizations have complete control over the data they store. However, individuals have full control over their data with blockchain technology. This is because each user has a private key, which is used to access their data. Individuals have complete control over their data, which is a key advantage of blockchain technology. It means that users can be sure that their data is safe and secure and that they can share it with whomever they choose.


Industry Impact: Celebrating IT's Milestones and Achievements This Year

The integration of AI into various solutions, including observability, IT service management, and database solutions, has allowed for greater automation of the mundane tasks that often bog down IT pros and hinder organizations from accelerating their digital transformations. AI-powered capabilities free up valuable time for IT pros, allowing them to focus on the most important tasks at hand. Autonomous operations, enabled by purpose-built models for IT operations and large language models, are poised to revolutionize IT environments in the coming years, reducing operation costs and bettering the lives of those in the tech workforce. ... The IT industry has a smorgasbord of accomplishments that have enriched the digital lives of organizations this year. The industry’s cloud migration journey, in particular, has played a central role in allowing organizations to scale their operations and pivot rapidly in response to market conditions. The cloud journey has transformed the way businesses operate, offering scalability, flexibility, and cost-efficiency. 


An IT Carol: How the Ghosts of IT Past and Present Can Help Improve the Future

You see yourself sitting at your desk, frantically trying to juggle more service desk tickets than you ever thought were possible. The trip to the future also shows the vast number of new complex systems that teams are using. As applications, networks, databases, and infrastructures grew in complexity, so did the tools and solutions we need to manage them. This has created a future where IT pros are trying to navigate and manage some of the most complex systems and environments imaginable. Teams are more overworked than ever before. You spend so much time fighting fires that you have no time to build better technology that provides important new capabilities. You have almost no time to think about anything else, let alone spend the holiday with family or friends. Thankfully, this is not a future that has to be, but rather one we can avoid if we take the right steps today. Right now, we are on the path to improving the lives of IT teams through the integration of artificial intelligence (AI). IT solutions powered by AI, such as observability and ITSM, can help manage the complex IT environments we are witnessing through ongoing digital transformation and the move to the cloud.


Why data, AI, and regulations top the threat list for 2024

Some of the essential questions security teams ought to be asking themselves include: How do we manage and safeguard aspects like confidentiality, integrity, and availability of data? What strategies can we employ to protect our data against cyber threats and misuse? How do we address the security challenges that emerge with expanding data repositories? How do we differentiate between valuable data and redundant information? Furthermore, there’s often a misalignment in how data is structured versus the business framework. Consequently, security teams may need to engage in discussions with business units to clarify issues such as how we are applying our data. With whom is this data being shared? ... Although AI technologies aren’t new, the recent widespread adoption of AI has introduced a myriad of business and security challenges for organizations. Key questions to consider include: How do we monitor AI usage within the organization? How do we regulate the data shared with AI systems by employees? How do we ensure ongoing compliance with ethical standards and legal requirements?


2023 - The year of transformation and harmonisation

Millennial leaders bring a distinctively dynamic, digitalised approach to their roles, characterised by agility, openness, proactiveness, and hands-on engagement. Their adeptness in navigating the digital landscape seamlessly allows them to forge strong connections within their predominantly Gen Z and millennial workforce. This workforce, in turn, embodies an informed, forward-looking, and tech-savvy ethos, driven by cutting-edge technologies that facilitate smart and efficient work practices. In the world of leading-edge technologies, the arrival of Chat GPT by OpenAI in the preceding November continued to take centre stage. Throughout the year, there was a surge in competition and discussions surrounding AI, particularly generative AI, which gained momentum. Amidst these discussions, Google's introduction of Bard added fervour to the debate, igniting intense conversations about the potential impact of generative AI on employment and the perceived threat to various job roles. This stirred a pot of mixed emotions—feelings of anxiety, uncertainty, and ambiguity swirled within the tech sphere.


Small businesses lead the way, while larger industries lag in tech adoption

On the other hand, many leaders in the small and mid-sized industrial sector are in the age group of 50 and above. When they initially embarked on their careers in the core industry, the adoption of IT and technology in their companies was significantly lower. Technology was not as pervasive, and IT integration was often considered an unnecessary expense. For those who did attempt computerisation in the early 2000s, the experience was often disheartening. Small IT companies that provided software solutions during that period often faced challenges and many even disappeared. The owners of these companies, faced with the uncertainty and challenges of running a technology-based business, opted for well-paying jobs instead. This experience left a lasting impact on their perception of technology and its role in business operations. Moreover, the proliferation of the internet and the rise of startups introduced a new paradigm. Many services and software were offered for free or at significantly reduced rates, fostering an expectation of inexpensive or cost-free technology solutions. This demotivated many software company owners from continuing in the business. 


What’s Ahead for AI In 2024: The Transformative Journey Continues

The coming year will see a shift in how generative AI is employed by businesses, with a greater emphasis on using organizational data. Companies are increasingly cautious about sharing sensitive data on public platforms, opting instead to host private foundation models within their four walls. This move is driven by concerns over data security and the desire to customize AI applications to specific organizational needs. By using their own data, companies can ensure that AI output is relevant and in context. This trend will lead to innovative applications of generative AI in a variety of business functions. ... New tuning techniques such as prompt tuning and retrieval augmented generation (RAG) will gain popularity next year. These methods provide more context-specific adjustments to AI models without the need for extensive retraining. Prompt tuning, for example, uses smaller pre-trained models to encode text prompts; RAG combines specific information with prompts to enhance the relevance of the model's output.



Quote for the day:

"People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 20, 2023

OpenAI announces ‘Preparedness Framework’ to track and mitigate AI risks

The announcement from OpenAI comes in the wake of several major releases focused on AI safety from its chief rival, Anthropic, another leading AI lab that was founded by former OpenAI researchers. Anthropic, which is known for its secretive and selective approach, recently published its Responsible Scaling Policy, a framework that defines specific AI Safety Levels and corresponding protocols for developing and deploying AI models.The two frameworks differ significantly in their structure and methodology. Anthropic’s policy is more formal and prescriptive, directly tying safety measures to model capabilities and pausing development if safety cannot be demonstrated. OpenAI’s framework is more flexible and adaptive, setting general risk thresholds that trigger reviews rather than predefined levels. ... Experts say both frameworks have their merits and drawbacks, but Anthropic’s approach may have an edge in terms of incentivizing and enforcing safety standards. From our analysis, it appears Anthropic’s policy bakes safety into the development process, whereas OpenAI’s framework remains looser and more discretionary, leaving more room for human judgment and error.


Australian federal government opens consultation on mandatory ransomware reporting obligation for businesses

The government is looking to develop legislation to "encourage" businesses to voluntarily provide information to ASD and the Cyber Coordinator about a cyber incident under a limited basis that would prevent the agencies from using this information for compliance action against the reporting organizations. The idea is to give more information than current regulation requires so the agencies can provide better support when businesses are under attack and to mitigate harms to individuals arising from cyber security incidents. ... Home Affairs t is seeking input from industry on the design and implementation of a cyber incident review board (CIRB). It is proposed that the CIRB would conduct no-fault incident reviews to reflect on lessons learned from cyber incidents, and share these lessons learned with the Australian public. The paper stated that the CIRB would not be a law enforcement, intelligence or regulatory body. It would be allowed to request information related to a cyber incident but would not have powers to compel and organization to do so. 


US Lawmakers Urge Pushback on EU’s Big Tech Crackdown

CIOs, CISOs, and other IT leaders should keep a watchful eye on the EU's regulatory developments, Martha Heller, CEO at executive search firm Heller Search, tells InformationWeek. “The EU’s legislative move to curtail the power of US tech companies is a double-edge sword,” she says in an email interview. “Its mandate that the largest US-based tech companies give users more choice among services could give smaller technology companies a fighting chance. But its bias against US tech companies could limit the US’s ability to compete on the global market.” Heller adds, “As both producers and enterprise consumers of technology, CIOs and CTOs should pay close attention to the EU, as it leverages its watchdog position.” ... For CIOs, keeping track of regulatory considerations is not getting easier moving forward. “You have five big US tech companies that are primarily affected,” Chin-Rothmann says. “You must look at that in context with all of the other digital laws globally. It’s going to be a pretty complex regulatory patchwork. And when the EU regulates, other countries tend to follow suit.


Web injections are back on the rise: 40+ banks affected by new malware campaign

Our analysis indicates that in this new campaign, threat actors’ intention with the web injection module is likely to compromise popular banking applications and, once the malware is installed, intercept the users’ credentials in order to then access and likely monetize their banking information. Our data shows that threat actors purchased malicious domains in December 2022 and began executing their campaigns shortly after. Since early 2023, we’ve seen multiple sessions communicating with those domains, which remain active as of this blog’s publication. Upon examining the injection, we discovered that the JS script is targeting a specific page structure common across multiple banks. When the requested resource contains a certain keyword and a login button with a specific ID is present, new malicious content is injected. Credential theft is executed by adding event listeners to this button, with an option to steal a one-time password (OTP) token with it. This web injection doesn’t target banks with different login pages, but it does send data about the infected machine to the server and can easily be modified to target other banks.


New Malvertising Campaign Distributing PikaBot Disguised as Popular Software

The latest initial infection vector is a malicious Google ad for AnyDesk that, when clicked by a victim from the search results page, redirects to a fake website named anadesky.ovmv[.]net that points to a malicious MSI installer hosted on Dropbox. It's worth pointing out that the redirection to the bogus website only occurs after fingerprinting the request, and only if it's not originating from a virtual machine. "The threat actors are bypassing Google's security checks with a tracking URL via a legitimate marketing platform to redirect to their custom domain behind Cloudflare," Segura explained. "At this point, only clean IP addresses are forwarded to the next step." Interestingly, a second round of fingerprinting takes place when the victim clicks on the download button on the website, likely in an added attempt to ensure that it's not accessible in a virtualized environment. Malwarebytes said the attacks are reminiscent of previously identified malvertising chains employed to disseminate another loader malware known as FakeBat (aka EugenLoader).


SSH shaken, not stirred by Terrapin vulnerability

As the university trio put it this week, a successful Terrapin attack can "lead to using less secure client authentication algorithms and deactivating specific countermeasures against keystroke timing attacks in OpenSSH 9.5." In some very specific circumstances, it could be used to decrypt some secrets, such as a user's password or portions of it as they log in, but this is non-trivial and will pretty much fail in practicality. Let's get to the nitty gritty. We'll keep it simple; for the full details, see the paper. When an SSH client connects to an SSH server, before they've established a secure, encrypted channel, they will perform a handshake in which they exchange information about each other in plaintext. Each side has two sequence counters: one for received messages, and one for sent messages. Whenever a message is sent or received, the relevant sequence counter is incremented; the counters thus keep a running tally of the number of sent and received messages for each side. As a MITM attack, Terrapin involves injecting a plaintext 'ignore' message into the pre-secure connection, during the handshake, so that the client thinks it came from the server and increments its sequence counter for received messages. The message is otherwise ignored.


SMTP Smuggling Allows Spoofed Emails to Bypass Authentication Protocols

Using SMTP Smuggling, an attacker can send out a spoofed email purporting to come from a trusted domain and bypass the SPF, DKIM and DMARC email authentication mechanisms, which are specifically designed to prevent spoofing and its use in spam and phishing attacks. An analysis found that the attack technique could allow an attacker to send emails spoofing millions of domains, including ones belonging to high-profile brands such as Microsoft, Amazon, PayPal, eBay, GitHub, Outlook, Office365, Tesla, and Mastercard. The attack was demonstrated by sending spoofed emails apparently coming from the address ‘admin(at)outlook.com’. However, attacks against these domains are possible — or were possible, because some vendors have applied patches — due to the way a handful of major email service providers set up SMTP servers. The vendors identified by the researchers are GMX (Ionos), Microsoft and Cisco. The findings were reported to these vendors in late July. GMX fixed the issue after roughly 10 days. Microsoft assigned it a ‘moderate severity’ rating and rolled out a patch sometime in the middle of October.


Digital Transformation: Composable Applications And Micro-Engagements

Composable applications are characterized by one simple concept. Organizations are evolving beyond the method of integrating low-level services, and they’re gravitating to consuming higher-level micro-engagements. Micro-engagements are defined as small, repeatable experiences that can be preconfigured and consumed within a larger environment. Organizations are questioning why they need to re-create the wheel (or in this instance, the experience) using low-level services. Why can’t they simply leverage commonly repeatable experiences and lower their overall technical debt while increasing overall agility? ... Once embraced, organizations adopting the composable application mindset will be biased toward vendors who provide use cases or process-specific micro-engagements. Out-of-the-box micro-engagements can be quickly and easily discovered, evaluated, integrated, branded and deployed with minimal effort and risk, and vendors that provide no-code platforms can enable organizations to quickly and easily create their own reusable micro-engagements.


CISO: Your Tech Security Guide

Every business, regardless of size, necessitates a security leader overseeing technology, information, and data security, even if not designated as a CISO. While midsize and larger enterprises commonly appoint a CISO within their C-suite, smaller businesses may delegate such responsibilities to a tech executive like a director of cybersecurity. Some smaller or startup enterprises opt to outsource the CISO role, enhancing protection for their intellectual property, data, and IT infrastructure. ... A CISO’s contribution lies in their comprehensive understanding of security, connecting various security facets with the organization’s IT systems and networks. They leverage this perspective to pinpoint security risks and devise effective management strategies. Successful CISOs adeptly articulate complex security issues in layman’s terms, enabling leadership to grasp the implications. ... Becoming a CISO involves understanding cybersecurity’s technical foundations alongside practical management principles, encompassing people, processes, and technology. Critical attributes include a fervor for information technology, commitment to ongoing learning, adept leadership, familiarity with security standards, and relevant certifications (CISSP, CISM).


Is Your Product Manager Hurting Platform Engineering?

Having a product manager from day one can lower oxygen levels for your platform team. Feedback may be filtered, delayed or misunderstood, massively reducing its value and making good outcomes less likely. Platform engineers should bathe in the full, grainy details of the feedback and use it to enrich their understanding of the tasks their customers are trying to complete — and where they are underserviced when completing those jobs. This helps the platform team create innovative solutions that may solve multiple unmet needs. You don’t have to use the Jobs To Be Done (JTBD) framework here. The crucial detail is that by immersing yourself in the customer’s needs, you can come up with ideas that solve many pain points instead of falling into the feature-factory trap of solving problem after problem. ... While it’s tempting to think ahead to what happens when your platform has achieved total adoption, been spun into a subsidiary organization, and had a conference named after it, it’s worth understanding that scale is not why you’ll add a product manager.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - December 19, 2023

7 Security Trends to Watch Heading into 2024

Cyberattacks led by nation-state threat actors, as well as politically motivated hacktivist groups, will continue in relation to the active conflicts in Ukraine and Gaza. Vanderlee points out that attacks in these regions may have a higher likelihood of kinetic impact. For example, Sandworm, a threat actor linked with Russia, disrupted the power in Ukraine and caused a power outage in late 2022. “Those are definitely things to watch out for, particularly if you do business in those regions or in countries situated around those regions,” says Vanderlee. ... Cloud migration continues to be a significant theme in the IT space. As more organizations embrace a cloud-first approach, threat actors are looking for ways to target hybrid and multi-cloud environments. Mandiant observed threat actors targeting cloud environments and seeking ways to gain persistence and move laterally in 2023, according to Google Cloud’s Cybersecurity Forecast 2024. That trend is likely to bleed over into 2024; threat actors are going to look for ways to exploit cloud misconfigurations and move laterally across multi-cloud environments.


Internet's deep-level architects slam US, UK, Europe for pushing device-side scanning

Client-side scanning has since reappeared, this time on legislative agendas. And the IAB – a research committee for the Internet Engineering Task Force (IETF), a crucial group of techies who help keep the 'net glued together –thinks that's a bad idea. "A secure, resilient, and interoperable internet benefits the public interest and supports human rights to privacy and freedom of opinion and expression," the IAB declared in a statement just before the weekend. "This is endangered by technologies, such as recent proposals for client-side scanning, that mandate unrestricted access to private content and therefore undermine end-to-end encryption and bear the risk to become a widespread facilitator of surveillance and censorship." ... For the IAB and IETF, client-side scanning initiatives echo other problematic technology proposals – including wiretaps, cryptographic backdoors, and pervasive monitoring. "The IAB opposes technologies that foster surveillance as they weaken the user's expectations of private communication which decreases the trust in the internet as the core communication platform of today's society," the organization wrote. 


Zombie Scrum First Aid

Zombie Scrum is on the rise! What may look like Scrum from a distance often turns out to be anything, but when you take a closer look. Although teams go through the motions of Scrum, Sprints don’t result in valuable outcomes, customers are not involved, teams have little autonomy, and nobody is doing anything to improve. The first response to Zombie Scrum might be to panic, run around, and hide below your desk. That doesn’t usually work. So, for our book, the Zombie Scrum Survival Guide, we created a simple poster that tells you exactly what to do in clear and simple language. ... Complaints, cynicism, and sarcasm don’t help anyone. It may even contribute to teams sliding further into Zombie Scrum. Instead, highlight what works well, where improvements occur, and what is possible when you work together. Use humor to lighten the mood, but not sugarcoat the truth. Facilitate the next Sprint Retrospective with the Liberating Structure ‘Appreciative Interviews’. It helps identify enablers for success in less than one hour. By starting from what goes well — instead of what doesn’t — AI liberates spontaneous momentum and insights for positive change as “hidden” success stories are uncovered. 


On-prem vs cloud storage: Four decisions about data location

For the best performance, system architects need to minimise latency between applications and storage. To access cloud storage via the public internet inevitably increases latency. Internet connections are also more prone to variable performance and general reliability issues. This suggests that for best performance, data should be stored on-premise. For the most critical applications, this is still usually the case. But the decision is not always clear cut. “We know that if you start to run compute on a storage bucket across the wire, you are going to have a performance impact,” cautions Paul Mackay, regional vice-president for EMEA and APAC at cloud data firm Cloudera. ... Even so, optimised on-premise storage can still be the cheaper option. As PA’s Gupta points out, much depends on how new the customer’s on-site infrastructure is, and how much life it has left. Cloud storage also has hidden costs. Data egress is frequently cited as a reason for higher than expected bills, but firms can also find they pay more than expected because they store data for extended periods in expensive tiers rather than dedicated cloud archives. Again, careful application design and a clear picture of data use will minimise this.


Parallels Between Open Source and Fully Remote Team Setups

In open source and remote work, digital communication is the vital link uniting individuals, fostering collaboration and understanding. Beyond information transfer, it builds relationships and transcends cultural differences. Contributors in open source projects require effective digital communication for diverse backgrounds. Platforms like GitHub offer not just code repositories but crucial discussion spaces. Remote work tools like Slack and Zoom create a virtual office, addressing the challenge of sustaining connections. Clarity counters miscommunication, while video meetings provide a personal touch, supporting empathetic communication. Inclusive digital communication ensures accessibility, involving all contributors. ... Open source communities epitomize meritocracies, fostering diversity and innovation by evaluating contributions solely on merit. In remote work, meritocracy shifts the emphasis from productivity to quality and impact, allowing introverted individuals to shine based on tangible outputs, fostering an objective assessment. While offering advantages, challenges include potential “echo chamber” effects and the risk of overlooking diverse contributions. 


The impact of prompt injection in LLM agents

Addressing prompt injection in LLMs presents a distinct set of challenges compared to traditional vulnerabilities like SQL injections. In these types of scenarios, the structured nature of the language allows for parsing and interpretation into a syntax tree, making it possible to differentiate between the core query (code) and user-provided data, and enabling solutions like parameterized queries to handle user input safely. In contrast, LLMs operate on natural language, where everything is essentially user input with no parsing into syntax trees or clear separation of instructions from data. This absence of a structured format makes LLMs inherently susceptible to injection, as they cannot easily discern between legitimate prompts and malicious inputs. Any defensive and mitigation strategies should be created with the assumption that attackers will eventually be able to inject prompts successfully. Firstly, enforcing stringent privilege controls ensures LLMs can access only the essentials, minimizing potential breach points. We should also incorporate human oversight for critical operations to add a layer of validation to safeguard against unintended LLM actions


Navigating cloud concentration and AI lock-in

Although you can choose to reduce the use of a specific cloud provider, it is sometimes nearly impossible to move some applications to other platforms. This is due to the coupling of those applications to the cloud platform and the economic inability to get them off those platforms. To guard against the risks associated with cloud concentration and AI lock-in, IT leaders are exploring strategies to reduce dependency on a single cloud provider. This can include leveraging single-tenant cloud solutions, colocation companies, and hybrid cloud strategies to diversify their cloud deployment and infrastructure. As IT leaders navigate the complex landscape of cloud concentration risks and AI lock-in, it is evident that an agile approach to cloud strategy and AI adoption is mandatory. Organizations can mitigate risks by understanding the nuanced considerations of vendor selection, fostering a multicloud approach, and embracing innovative technologies. At the end of the day, keep your eyes open for the fully optimized solution, and do not focus on just a single cloud provider’s services, including AI.


Will Putting a Dollar Value on Vulnerabilities Help Prioritize Them?

Whether the focus on impact makes VISS any more valuable than other scoring systems is a matter of debate. Any scoring systems should not just replicate what others are already doing, and VISS seems to try to cover some new ground — at least in terms of scope, says Brian Martin, vulnerability historian at Flashpoint, a threat intelligence firm. "Do we need another scoring system? No, but kind of yes," he says. "On one hand, we have too many SSes. We have CVSS version 2, version 3, version 4, we have EPSS, we have the ransomware prediction scoring system — So I'm skeptical, but if it is more direct and to be utilized for a single purpose, such as bug bounties, then I can see it being beneficial." However, companies should not expect prioritizing vulnerabilities using VISS to be any easier than it is with other systems. While VISS may be simpler to calculate, it still requires knowledgeable answers to assign the right level of risk to vulnerabilities, says Tim Jarrett ... "Scoring models are not are not silver bullets," he says. "You actually have to adopt them and use them and feed them. And I think that what this does not do is make the problem of prioritizing vulnerabilities any less labor intensive."


9 tips for achieving IT service delivery excellence

To achieve maximum efficiency, Cziomer also suggests focusing service efforts on DevOps Research and Assessment (DORA) metrics, such as “lead time for change” and “time to restore service.” Customer-centric Net Promoter Scores are equally important, he adds. “To dive deeper into understanding our services, I employ methods like value stream mapping to pinpoint bottlenecks or inefficiencies,” says Cziomer, who feels that proactive approaches such as these enable IT organizations to consistently elevate their service levels. ... Effective IT service delivery begins by creating and standardizing processes and documentation, says Patrick Cannon, field CTO at data center and cloud services firm US Signal. Standardization ensures a consistent end-user experience with outcomes that adhere to established security policies. “It’s also beneficial for effective training and new IT staff onboarding,” he says, adding that when IT understands the needs of each business unit, it opens the way to a more proactive service approach, reducing downtime and fostering innovation.


Architecting for Resilience: Strategies for Fault-Tolerant Systems

A fault-tolerant system can keep working properly even when things go wrong. Faults are any issues that make a system behave differently than expected. Faults can be caused by hardware failure, software bugs, human errors, or environmental factors like power outages. And in complex systems with a lot of services and sub-services, hundreds of servers, and distributed in different Data Centers minor issues happen all the time. ... Testing plays a key role in building resilient, fault-tolerant systems. Testing helps identify and address potential weaknesses before they cause real failures or outages. There are various testing methods focused on resilience, including chaos engineering, stress testing, and load testing. These techniques simulate realistic failure scenarios like hardware crashes, traffic spikes, or database overloads. The goal is to observe how the system responds and find ways to improve fault tolerance. Testing validates whether redundancy, failover, replication, and other strategies work as intended. All big IT companies practice resilience testing. And Netflix is leading here. 



Quote for the day:

"Perhaps the ultimate test of a leader is not what you are able to do in the here and now - but instead what continues to grow long after you're gone" -- Tom Rath