Daily Tech Digest - October 18, 2023

Implementing an Effective Data Strategy

What challenges must a data strategy overcome: creating a data culture, building the data business case, or fixing data issues? Martin Davis, CIO of Southern Company, said, “It is all of those, but it starts with data ownership. Once you have the right business ownership, you can work on the culture, the business case, and other things.” Jim Russell, CIO for Manhattanville College, claimed that with ownership established, “What most organizations are lacking are foundational skills in the workforce. As competing knowledge requirements have intensified, fewer employees seem to have data literacy or data fluency. For this reason, I’ve been pushing data literacy as a foundational requirement, with expertise resulting in data fluency, which means different things in different campus communities.” ... Obviously, a smart data strategy comes from business and digital strategy. For this reason, Russell said, “It is important to start with a common vision that spans data products and services. With this, CIOs should help teams define vision and create clear scaffolding around that overarching vision...”


The evolution of deception tactics from traditional to cyber warfare

Determining the next steps in responding to a cyber incident or attack presents many concerns that require careful ethical navigation, further underscoring the importance of international governance and regulations. An escalatory response to a cyberattack, such as a “hack back” or “attack back,” raises legal and ethical questions if such action could lead to a larger conflict. Because cyber attackers are becoming more skilled at hiding their true identities, there is indeed cause for concern about whether a response could lead to retaliatory actions and collateral damage against innocent parties. Additionally, the intentions of the original attacker could be misidentified by the victim, leading to disproportionate or unneeded attacks. ... This necessitates a cyber defense strategy that doesn’t just block or react, but one that is also designed to seek out attackers’ motives and identities. It’s a tale as old as time in the military world—if you understand your opponent’s motives, you have the upper hand.


Developers and the AI Job Wars: Here's How Developers Win

“Software development is less about writing software and more about understanding the problem you are trying to solve,” says Louis Lang, CTO and co-founder of Phylum. “While the likes of ChatGPT and Copilot might make the writing process quicker, it has a long way to go before it can reason through a novel problem domain. Making development faster with AI only applies to scaffolding new projects and writing well-trodden code, and even this seems problematic from time to time. If you try to produce something that requires deep expertise, AI will not help you.” But the jobs it does destroy, it replaces with new roles. For example, AI is itself software and as such requires developers. “With the rise of generative AI, software developers play a pivotal role in designing, building, and maintaining the underlying infrastructure that powers AI applications,” says Adam Prout, CTO of SingleStore, a cloud-native database. “Their position is vital to implementing algorithms, creating data pipelines, and optimizing models in close collaboration with data scientists and machine learning engineers. The expertise of a software developer is integral to bringing AI projects from conceptualization to real-world deployment.”


It’s time for cloud tech to meet operational tech at industrial sites

Industrial sites’ challenges can be daunting, but advances in cloud computing—particularly in security and edge computing—have come a long way. Some industrial sites are already adopting standards in site data collection, such as OPC Unified Architecture (OPC UA), a machine-to-machine communication protocol that allows control systems to exchange data securely and consistently. ... Edge computing can store a subset of data at a site, and in some cases can even provide cloud compute capabilities, thus allowing sites to continue to use cloud capabilities even if network connectivity is lost. Of course, the corresponding edge computing architectures—the amount of computing needed to store and process data before sending it to the cloud—will vary based on the size of the connectivity gap, the amount of data to be transferred, and the use of digital assets, such as sensors and recording devices. Edge computing also manages data’s return trip from the cloud to sites, making cloud-dependent, on-site applications faster and more reliable, since it reduces reliance on network connectivity.
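The store-and-forward behavior described here can be sketched in a few lines. Everything below is illustrative: `EdgeBuffer` and the `uplink` callback are hypothetical names, standing in for whatever buffering and upload mechanism a real edge stack provides.

```python
from collections import deque

class EdgeBuffer:
    """Store-and-forward buffer: hold site data locally and forward
    it to the cloud only while connectivity is available."""

    def __init__(self, maxlen=10_000):
        # Bounded queue so a long outage cannot exhaust edge storage.
        self.queue = deque(maxlen=maxlen)

    def record(self, reading):
        """Capture a reading at the edge, regardless of connectivity."""
        self.queue.append(reading)

    def flush(self, uplink):
        """Try to send buffered readings; stop at the first failure
        and retain the remaining data for the next attempt."""
        sent = 0
        while self.queue:
            if not uplink(self.queue[0]):
                break  # connectivity lost: keep the rest buffered
            self.queue.popleft()
            sent += 1
        return sent
```

A real deployment would also cover the sizing questions the article raises (how much data, how long an outage), but the pattern of recording locally and draining opportunistically is the core of it.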


Opportunities and Limitations of Deploying Large Language Models in the Enterprise

The progress we’ve seen in the last few months is nothing short of impressive. While natural language understanding and processing is not net-new, it’s now much more accessible. Not to mention that models have gone from 0 to 60 in terms of depth and capabilities. But, for many CIOs, the value may not be immediately obvious. Many organizations have been slashing budgets in the last year and making blind investments is not on their agenda. ... Large Language Models (LLMs) like GPT-4 are based on neural networks, which are inherently probabilistic in nature. This means that given the same input, they might produce slightly different outputs each time due to the randomness in the model’s architecture or during the training process. This is what we mean when we say LLMs are “non-deterministic.” ... Despite these challenges, there are ways to manage the non-deterministic nature of LLMs, such as using ensemble methods, applying post-processing rules or setting a seed for the randomness to get repeatable results.
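The seeding remedy is easy to demonstrate without a real model. In this toy sketch (the three-token distribution and all function names are invented for illustration), sampling with the same seed reproduces the same output, and greedy decoding is deterministic by construction.

```python
import random

def sample_token(probs, rng):
    """Sample one token id from a next-token distribution: the
    stochastic step that makes LLM decoding non-deterministic."""
    r, cum = rng.random(), 0.0
    for token_id, p in enumerate(probs):
        cum += p
        if r <= cum:
            return token_id
    return len(probs) - 1  # guard against floating-point rounding

def generate(probs, seed, n=5):
    """Draw n tokens with a seeded RNG: same seed, same output."""
    rng = random.Random(seed)
    return [sample_token(probs, rng) for _ in range(n)]

def greedy(probs):
    """Deterministic alternative: always pick the most likely token."""
    return max(range(len(probs)), key=probs.__getitem__)
```

Hosted LLM APIs expose the same levers in different clothing, for example a temperature of zero for greedy-style decoding or, where supported, a seed parameter for repeatable sampling.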


How to get internal employee poaching right

Even if your company has an open culture, it’s critical to develop cooperative relationships with managers in other departments because losing a top performer isn’t easy for anyone. Nevertheless, if a user department manager recognizes an employee’s interest in transferring to IT, and you have a strong working relationship with that manager, internal hiring can go a lot more smoothly. ... At some companies, poaching an employee from another department is considered unethical and underhanded. Regardless, internal employee poaching can certainly be an issue if you actively recruit another department’s employee without letting the other department manager know. It is vital to know up front the actions and behaviors that are acceptable within your company before you start recruiting another department’s employee. For instance, in some cases, it is acceptable for an employee to be “loaned out” from one department to another for the duration of a specific one-off project. Such a policy helps provide temporary resources for projects while enabling employees on loan to gain knowledge and cross-train in another discipline.


The Never-Ending Battle: Routine Patching vs. Operational Stability

There’s an ongoing battle between competing priorities being waged every day in enterprises globally, and it’s been going on for decades. Cyber security teams are concerned with unpatched vulnerabilities and the breaches they risk, while IT professionals are driven by operational availability, the lack of which jeopardizes the business’ ability to operate. In today’s world, operational stability is winning, to the delight of threat actors everywhere. ... Existing vulnerability management strategies and tools focus only on the prioritization of risk. While that helps organizations identify which vulnerabilities they should attend to first, the actual orchestration of remediation is often ignored entirely. Remediation efforts must be handled discretely by different tools, processes, and teams, often with little to no continuity between them. Further, tangibly demonstrating the efficacy and progress of a vulnerability and patch management program is a massive undertaking. With each new vulnerability and patch, individual teams tackle discovery, correlation, and remediation in a one-off fashion, compounding existing inter-team frustration and increasingly blurring the definition of success.


DeepMind Co-founder: The Next Stage of Gen AI Is a Personal AI

Suleyman said Pi will do away with the popular internet model of offering a product, like Google Search or Facebook, free to users while relying on advertising to pay the bills. The ad-based approach does not align the interest of the tech platform with the user. “Really, the customer for Facebook and Google and the other big companies is the advertiser,” he said. “It’s not the user.” Pi will be different since it does not disseminate its APIs, which would be needed for commercial uses. Instead, “you as the consumer are the only person that pays for the AI,” Suleyman said. As for the known risks of generative AI including toxic content, hallucinations and bias, Suleyman said Pi was built to avoid toxic subjects. He claims that “none of the prompt hacks work against us.” Prompt hacks, which include asking the AI to pretend to be another persona, are geared to get around safeguards against disclosing dangerous or toxic responses. For hallucinations, Suleyman said Pi has access to real-time information but admitted this remains a challenge.


What are the cyber risks from the latest Middle Eastern conflict?

One substantial difference observed between the two conflicts is a lack of cyber activity before the initial Hamas attack. Prior to Russia’s invasion of Ukraine, which had been signalled months in advance by the Russian government, Ukraine was bombarded with a widespread campaign of cyber intrusions designed to soften up critical targets in advance. This was not the case in the Gaza war, and this is not much of a surprise, because out of necessity, Hamas spent months – maybe years – plotting its initial attack with exceptional attention paid to operational security (OpSec). Indeed, it has been suggested that some senior members of Hamas were kept in the dark entirely, in case they were compromised by Israeli intelligence. Therefore, for the incursion to take Israel by complete surprise, it may have been necessary for pro-Palestinian groups and Hamas-affiliated actors to confine their activity to normal levels. According to SecurityScorecard’s intel team, this was almost certainly the case. 


Microsoft Playwright Testing: Scalable End-to-End Testing for Modern Web Apps

With the playwright/test runner, tests run in independent, parallel worker processes, with each process starting its own browser. Moreover, increasing the number of parallel workers can reduce the time it takes to complete the full test suite. However, when running tests locally or in a continuous integration (CI) pipeline, parallelism is limited by the number of central processing unit (CPU) cores on the local machine or CI agent machine. ... With Microsoft Playwright Testing, developers can use the scalable parallelism provided by the service to run web app tests simultaneously across all modern rendering engines such as Chromium, WebKit, and Firefox on Windows and Linux and mobile emulation of Google Chrome for Android and Mobile Safari. In addition, the service-managed browsers ensure consistent and reliable results for functional and visual regression testing, whether tests run from a CI pipeline or development machine.
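The effect of adding workers can be estimated with a simple scheduling model. This sketch is not part of Playwright; it just illustrates, under a longest-test-first assumption, why wall-clock time falls as workers increase until the longest single test (and ultimately the available CPU cores) becomes the floor.

```python
def suite_wall_clock(test_durations, workers):
    """Estimate wall-clock time for a test suite split across
    parallel workers, assigning the longest tests first so each
    worker carries a roughly even load."""
    loads = [0.0] * workers
    for duration in sorted(test_durations, reverse=True):
        # Give the next-longest test to the least-loaded worker.
        loads[loads.index(min(loads))] += duration
    return max(loads)
```

With six tests totaling 120 seconds, one worker needs the full 120 seconds while three finish in about 40; past six workers the 30-second longest test becomes the floor, which is why a service offering more parallelism than a local machine's core count can still shorten large suites.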



Quote for the day:

"One advantage of talking to yourself is that you know at least somebody's listening." -- Franklin P. Jones

Daily Tech Digest - October 17, 2023

Beware the cost traps that can strain precious cybersecurity budgets

Overlapping services that duplicate functions are another common overspend that can eat into security budgets. "Paying for these duplicate security functions can be financially inefficient and strain the budget," says Nick Trueman, CISO at cloud services provider Nasstar. It can also result in integration challenges whereby coordinating and integrating multiple providers with similar functions leads to complexities and interoperability issues, he adds. CISOs should conduct a comprehensive review and identify all current security providers and the services they offer. ... On the topic of redundancies, CISOs can often end up paying for tools that do not deliver the expected benefits, significantly impacting their security budgets and coverage plans. CISOs may encounter scenarios where they invest in security tools or technologies that, despite their initial promise, fail to provide the anticipated value or return on investment (ROI), says Paul Baird, chief technical security officer at Qualys. This could happen for several reasons, including inadequate integration with existing systems, limited user adoption, or the tools not effectively addressing the organization's specific security needs.


Essential cyber hygiene: Making cyber defense cost effective

When it comes to dollars and cents, the industry as a whole has made many attempts to calculate the cost of a cyber attack. The same can’t be said about estimating the costs of implementing cyber defenses. But there’s value in knowing both of those metrics. Knowing what an enterprise can spend to prevent an attack is helpful when you know what they’re willing to spend to recover from an attack. For example, if the cost of recovering from a cyber attack is $1.25 million but an enterprise can spend only $1 million on implementing a set of robust cyber defenses, which one should they choose? To estimate the cost of IG1 Safeguards, we looked at the tools that an enterprise needs to implement them. Tools are priced in many ways, the most common being the following: by number of employees, users, workstations/servers, and/or by usage (e.g., megabyte, gigabyte, hours). CIS created IG1 Enterprise Profiles to help streamline the process of calculating costs. Our estimate shows that obtaining and deploying commercially-supported versions of the tools should be less than 20% of the Information Technology (IT) budget for any size enterprise.
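The recovery-versus-prevention comparison above can be made explicit. A standard refinement, assumed here rather than taken from the article, is to weight the incident cost by its annual likelihood (annualized loss expectancy) before comparing it with the cost of defenses.

```python
def prevention_pays(defense_cost, incident_cost, annual_probability):
    """Prevention is worthwhile when its cost is below the expected
    annualized loss it averts (incident cost times likelihood)."""
    expected_loss = incident_cost * annual_probability
    return defense_cost < expected_loss
```

At the article's figures ($1 million of defenses against a $1.25 million recovery), prevention pays if a breach is effectively certain; at a 50% annual likelihood the expected loss drops to $625,000 and the same spend no longer clears the bar. That is exactly why knowing both metrics matters.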


Why the human factor is critical to ITOps success

Communication between developer teams and “business types” can be fraught. Developers typically work very hard for long hours to deliver what customers want. Yet efforts frequently fall flat, due in no small part to a failure of one or both sides to understand or explain what the other really wants or needs, says Shafrir. “There will always be a wall between the two, but especially during this time with tonnes of services on the internet and daily changes to software. It’s a problem if only business people are in touch with customers,” he says. Developers often have little idea how the customers are using the product – not least because they write code and “throw it over the fence”. “Then it’s frustrating when we’re [developers] told our quality is very low and it’s not a good job and we don’t work hard enough and other things,” says Shafrir. Shafrir recommends that IT leaders “take down that wall” between the two teams. If developers are notified and know exactly – continuously – how the code is performing in the customer environment, fixes can be rolled out faster. 


Security Governance and Risk Management in Enterprise Architecture

Security governance isn't just a rulebook. It's a structured approach that champions data protection, system reliability, and seamless business operations. With this governance in place, the intricate realm of cybersecurity becomes a navigable terrain. True security roots itself deep within organizational culture. When every team member, from the top brass to the newest recruit, values security, the organization stands united and fortified. A collective commitment to security amplifies the organization's resilience. ... Frameworks, especially ones like the NIST Risk Management Framework, offer more than theoretical value: they shape practical decisions in technology, placing risk considerations at the forefront. Adopting such guiding principles ensures that architectural choices resonate with both innovation and security. Still, the landscape of risk is dynamic, changing with every technological advancement and emerging threat. Regular, thorough risk assessments become a beacon that illuminates potential security gaps. Allocating resources to these evaluations ensures a resilient and adaptive enterprise architecture, always prepared for the challenges ahead.


Why A One-Size-Fits-All 'Compliance' Plan Can Be Dangerous

IT departments these days use many different architectures with various hardware, software and network configurations. Because of these differences, it's difficult to create a single cybersecurity formula that works for all companies. Some of the pitfalls of trying to cut corners and save costs by implementing a generic plan include: Lack Of Customization - A one-size-fits-all approach doesn't consider the specific problems and needs of each company. What works for one organization may not be enough to address the weaknesses and particular requirements of the next. It's important to customize security measures to fit the unique characteristics of each company to effectively protect against cyber threats. Increased Risk Of Breaches - When companies use a standardized compliance plan, it sets a basic level of security. However, this plan might not take into account the specific risks and security gaps that exist in each organization. Without customized security measures, a greater chance exists of experiencing data breaches or cyberattacks.


Generative AI is everything, everywhere, all at once

Unlike generative AI, which exploded within the past year thanks in part to OpenAI's consumer-facing ChatGPT, AI is nothing new. And it's a fairly ambiguous term, Toubia explained. "There's a wide range of things you could label as AI or machine learning," he said. "There's some very simple statistical methods that have been around for over 100 years that technically could be as clever as AI." Given the enigmatic nature of generative AI, it's also a complicated product to patent, audit, or regulate, which further exacerbates AI washing. "Companies don't really have to publish or explain their AI because it's a trade secret. There's no patent that you could read, and we don't really know what's under the hood, so to speak," Toubia said. Regulatory institutions like the FTC are certainly trying to control the unwieldy industry with industry-wide warnings and reports. While he appreciates the ideas behind the warnings, Thurai is doubtful that the FTC's stern warnings and oversights will be enforced due to how difficult it will be to prove in court.


The Whats and Hows of DevOps Talent Retention

That a lack of opportunities for growth accounts for close to half of DevOps employee turnover once again highlights the importance of a well-planned approach to supporting ongoing learning. By understanding employees’ performance, preferences and environment, a wider range of support offers can be implemented. Providing tailored assignments that allow employees to focus on their skills or passion not only helps build commitment to the job but also acts as a motivator. This can stem from a simple 15-minute project or a year-long program. But advocating personal development courses and upskilling techniques is not enough. Employers must also harness peer recognition. A sales organization within the United States Postal Service (USPS) recently tried to boost peer recognition by setting up a simple online platform that lets employees call out behavior associated with newly learned skills. Overall employee engagement rose by 8% in the initial pilot group. Such strategies were then used to improve work across the organization.


How to Partner with Law Enforcement Following a Cyberattack

Law enforcement will come with the intention of acting as a partner to the victim organization, alongside other stakeholders like remediation firms and insurance companies. “We would really expect to be seen as true partners in every sense of the word,” says Alway. A law enforcement team could include investigative agents with cybersecurity backgrounds, as well as technical experts, such as computer scientists and data analysts. That partnership will be based on information sharing. Organizations will tell law enforcement about the nature of the incident, provide logs and any other evidence of the intrusion, and answer questions. Law enforcement will share their knowledge of IOCs and any information they have that can help enterprises during the remediation process. “There’s no such thing as over communication in cyber incidents,” says Alway. It is important to keep in mind that law enforcement’s job takes time. “A lot of times the investigation piece could drag on for multiple years, whereas the company [or] organization is on a shorter timeline,” says Cabrera.


Cyber security professionals say industry is “booming”

Cyber security professionals are still positive about the industry and their opportunities despite the economic climate, according to The Chartered Institute of Information Security's (CIISec) 2022/2023 State of the Profession report – the eighth annual survey of the cyber security industry. In the survey of 302 security professionals, almost 80% say they have ‘good’ or ‘excellent’ career prospects, and more than 84% say the industry is ‘growing’ or ‘booming’. Despite being relatively protected from economic challenges, the report highlights that the industry is still plagued by issues including stress and overwork. 22% of respondents work more than the 48-hour weekly limit set by the UK Government, and 8% work more than 55 hours which, according to the World Health Organisation, marks the boundary between safe and unsafe working hours. The report also found: Worries over workload loom over cyber security professionals - When asked what keeps them awake at night, the two main sources of stress for cyber professionals are day-to-day stress/workload (identified by 50%) and suffering a cyber-attack (32%).


Are enterprise architects the new platform team leaders?

Today there is a need for platform teams to architect the connections between business processes, outcomes, and the technology. Many teams still operate in silos, whether around specific functional pieces of technology or simply within individual teams. However, today, there are several key factors reshaping the way teams approach their work. The easy access to technology outside of corporate IT has fundamentally changed the dynamic. In addition, the idea of IT owning a very small piece of the technology is no longer acceptable. For example, if you are a database team, you can’t just be responsible for the database itself – you must also own the delivery of that database as a service, including the additional technology around it like the OS, compute, memory, and all elements of cost, security, access, and performance. Enterprise architects in this new role as platform leaders must adopt a cross-functional mindset. They must look left and right, cross-functionally, at the technology, how it should fit together, the services the company should offer – and for what use cases.



Quote for the day:

"A leader is always first in line during times of criticism and last in line during times of recognition." -- Orrin Woodward

Daily Tech Digest - October 16, 2023

A Holistic Approach to Cyber Resilience

Beyond investing in the right training techniques to build resilience, it is important for security leaders to set up the right culture for cybersecurity and ultimately build a strong cybersecurity foundation. To help meet today’s cybersecurity challenges, organizations should treat cybersecurity as a team sport, working with employees to adopt a collective responsibility mindset throughout the entire organization so as to not place blame or pressure on just the cybersecurity teams. To start building this collective mindset, begin including employees outside of security teams in security training to avoid the blame game when an attack inevitably happens. ... Not only does this help ease the burden security teams feel, but it also ensures that all employees know the appropriate steps to take when encountering a potential threat. By focusing on creating a culture of understanding, employees outside the security team may be more open to learning from these incidents and identifying concerns in the future, ultimately giving your organization a more holistic view of the true state of its cyber resilience.


Why IT projects still fail

Some project leaders list the prevailing do-more-with-less expectation as another reason for failed IT projects today. They say this mentality generally leads to project teams lacking the resources that they need to get the desired work done on time. “Everybody is very concerned with that bottom line, and they should be concerned about that, but the other side of that is they’re expecting a few people to do a lot of things,” Phillips says. For example, she says workers are frequently assigned to multiple projects simultaneously, and many are assigned to that project work on top of their existing duties. As a result, these workers are pulled in too many different directions. Others say enterprise leaders underestimate costs and the time required to complete the work or they fail to allocate the right talent to the team, even as project managers surface the consequences of under-allocating the money, talent, and time needed for success. Experienced project leaders say it’s crucial for IT project managers and CIOs themselves to ensure that the business sponsors and C-suite executives get the information they need to be realistic about the required resources, support, and schedules.


Making sure open source doesn’t fail AI

The biggest difficulty is in defining open source in a world where data and software are so inextricably linked. As Maffulli describes, the most intense discussions among his working group revolve around the dependencies between training data and the instructions on how to apply it. Perhaps not surprisingly, given the complexity and the stakes involved, “there is no strong consensus right now on what that means,” he says. There are at least two approaches, with two primary factions squaring off in the working group. The first tries to stick closely to the comfortable concept of source code, promoting the idea that “source code” gets one-to-one translated to the data set. In this view, the combination of the instructions on how to build the model and the binary code is the source code subject to “open source.” The second faction sees things in a radically different way, believing that you can’t modify code without having access to the original data set. In this view, you need other things to effectively exercise the fundamental freedoms of open source. 


What Are Data Governance Tools, and How Do They Work?

Data governance tools catalog data assets; they collect data from databases, files, applications and other data sources. They then tag data assets based on predefined or custom metadata attributes and classify them based on their sensitivity, importance or relevance to specific compliance regulations. Data governance software ensures that data is accurate, complete and consistent by performing data quality checks and validations. ... Data governance tools help businesses define and manage data ownership, roles and responsibilities as well as implement data security and privacy measures. They ensure data management processes meet regulatory compliance and quality standards. They also help automate the workflow and provide structure to large volumes of data. Data governance tools serve several purposes, which include data quality management to ensure data remains accurate, complete and consistent across an organization. These tools can even be used to enforce compliance with regulatory requirements, such as GDPR and HIPAA. 
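A completeness validation of the kind described might look like the following sketch. The function name and field conventions are invented for illustration, and real governance suites run far richer checks (consistency across sources, referential integrity, regulatory tagging).

```python
def quality_check(records, required_fields):
    """Flag records failing a basic completeness check, one of the
    validations a governance tool runs over cataloged data.
    Empty and missing values both count as failures."""
    failures = []
    for index, record in enumerate(records):
        missing = [f for f in required_fields if not record.get(f)]
        if missing:
            failures.append((index, missing))
    return failures
```

Running such a check on every ingest, rather than ad hoc, is what turns a one-off cleanup into the continuous quality management the tools promise.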


Enhancing Enterprise Solutions with SOC as a Service Network Protection

Companies that outsource their SOC activities can benefit from the expertise, cutting-edge technology, and risk assessments of security professionals. Nearly 71% of SOC analysts say they are burned out in their jobs, in part because only a few of them are in charge of the safety of the entire company. Attackers can take advantage of holes in a business's infrastructure to gain unauthorized access or disrupt operations. The threat monitoring and oversight services provided by SOC as a Service help identify and assess potential risks across both OT and IT environments. Owing to this proactive approach, companies are able to tackle problems before they can be exploited against customers. An often-overlooked process is regularly checking network infrastructure, software, and users for flaws. These analyses uncover existing vulnerabilities and assess the risks associated with each problem, allowing businesses to choose updates and solutions. A SOC as a Service provider not only assists in identifying problems, but also in monitoring and resolving those flaws.


Unleashing the Power of AI and ML in Data

Businesses can leverage AI to generate data such as fake reviews and use that information to test and demo a product. This type of demo data generation helps to create a valuable and practical data product that is quick and efficient. One of the key benefits of using AI to generate mock data is that it allows businesses to test and demo data products without having to collect real data from users. ... In forecasting, ML delivers highly automated, finely granular, and more accurate predictions than manual projections. It solves the knowledge risk inherent in organizations where projections are based on “gut feel” and “years of experience.” ML can also pick up on the nuances and subtleties of multiple features playing out in parallel that are invisible to the human eye. ... AI is a powerful technology that can enhance and optimize data analysis, but it doesn’t replace the essential role of software engineers and human expertise. Great technology demands leadership, creativity, empathy, and the ability to navigate complex ecosystems and stakeholders – a uniquely human capacity.
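Mock-data generation of this kind can be as simple as the sketch below. The fields and phrases are invented, and a production tool would use a richer generator (or an LLM, as the article suggests), but the principle of seeded, clearly labeled synthetic records is the same.

```python
import random

def mock_reviews(n, seed=0):
    """Generate synthetic review records for testing and demos; no
    real user data is collected, and every record is labeled."""
    rng = random.Random(seed)
    products = ["widget", "gadget", "gizmo"]
    phrases = ["works great", "met expectations", "would not recommend"]
    return [
        {
            "product": rng.choice(products),
            "rating": rng.randint(1, 5),
            "text": rng.choice(phrases),
            "synthetic": True,  # always label generated data
        }
        for _ in range(n)
    ]
```

Seeding makes demo runs repeatable, and the explicit `synthetic` flag keeps generated records from ever being mistaken for collected user data.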


How APAC organisations are tapping generative AI

Across the Asia-Pacific (APAC) region, organisations like GovTech and Culture Amp have been doubling down on GenAI initiatives, more so than other parts of the world. According to a recent study by Enterprise Strategy Group and TechTarget, 75% of APAC respondents plan to adopt generative AI within the next 12 months, with nearly a third already running GenAI workloads in production or are testing the technology. The enthusiasm for generative AI in APAC is also reflected in IT budgets, with over half having allocated budgets to GenAI. Among them, 39% have allocated between 5% and 20% of their IT budget to the technology. The blinding speed of GenAI uptake among APAC organisations is also reflected in the 19% of organisations that are not yet sure if GenAI is a budget item. Nevertheless, the rapid emergence of GenAI as a top IT priority is both impressive and alarming. The study shows that GenAI has become the fifth most important strategic initiative in APAC, trailing behind digital transformation, automation, cyber security, and cost-cutting, and surpassing traditional priorities like cloud and application modernisation.


CISOs and board members are finding a common language

“The C-Suite and board of directors are increasingly relying on CISOs for guidance across a sophisticated threat landscape and changing market conditions,” said Jason Lee, CISO, Splunk. “These relationships provide CISOs the opportunity to become champions who strengthen an organization’s security culture and lead teams to become more cross-collaborative and resilient. By communicating key security metrics, CISOs can also guide boards on adopting emerging technologies, such as generative AI, to help improve cyber defense management and prepare for the future.” ... In 47% of organizations surveyed, the CISOs are now reporting directly to the CEO, indicating a closer relationship with the C-Suite and their respective governing boards. Boards of directors are increasingly looking to CISOs to guide cybersecurity strategy, offering an opportunity for CISOs to articulate value and fill in communication gaps. Numerous CISOs across many industries report regular participation in board meetings, including technology (100%), government (100%), communications and media (94%), healthcare (88%) and manufacturing (86%).


Generative AI an Emerging Risk as CISOs Shift Cyber Resilience Strategies

Enterprise risk executives should start by implementing clear rules prohibiting employees from using any unapproved web applications and tools. “It’s really another instance of shadow IT, which includes any IT-related purchases, activities or uses that the IT department is unaware of and which has historically been a big problem in most organizations,” Stevens says. When employees use approved GenAI tools, the company needs rules governing what data can -- and, more importantly, cannot -- be used with the tool. “But these rules shouldn’t be limited to only GenAI tools,” she adds. “They should be in place for all tools and applications used in the organization.” These executives should partner with any key stakeholders who might use GenAI tools. Ideally, Stevens says, the organization has a CISO, with the infosec team as a key stakeholder for every application that accesses or stores data or lives within the company’s network and ecosystem.


How To Use Serverless Architecture

Imagine an application as being composed of two parts: the frontend, which users interact with, and the backend, which powers the frontend. In serverless architectures, this backend code runs on infrastructure provided by the cloud service, removing the need for businesses to manage physical servers. While this simplifies things significantly and shifts responsibility for physical-server security onto the cloud provider, it doesn’t entirely remove responsibility from the business owner or the developer: you still need to secure your own code, and some initial setup is necessary, albeit less time-consuming than traditional server setups. Serverless architectures are also event-driven. When certain events or triggers happen (like an HTTP request or database event, for example), your application responds. The building blocks of serverless applications are functions—small pieces of code, each doing a specific task.
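To make this concrete, a single serverless function in the style of an AWS Lambda HTTP handler might look like the sketch below. The event shape and handler signature follow common Lambda conventions, but treat the details as illustrative rather than a definitive implementation:

```python
import json

# A minimal event-driven serverless function in the style of an AWS
# Lambda HTTP handler: one small piece of code doing one specific task.
def handler(event, context=None):
    # The trigger (an HTTP request) arrives as an event dictionary.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    # The function's only job: build and return the HTTP response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The cloud provider invokes the function whenever the trigger fires; you deploy only the function, never a server.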



Quote for the day:

"Thinking should become your capital asset, no matter whatever ups and downs you come across in your life." -- Dr. APJ Kalam

Daily Tech Digest - October 15, 2023

Generative AI and the legal landscape: Evolving regulations and implications

So far we’ve seen AI giants as the primary targets of several lawsuits that revolve around their use of copyrighted data to create and train their models. Recent class action lawsuits filed in the Northern District of California, including one filed on behalf of authors and another on behalf of aggrieved citizens, raise allegations of copyright infringement and violations of consumer protection and data protection laws. These filings highlight the importance of responsible data handling, and may point to the need to disclose training data sources in the future. However, AI creators like OpenAI aren’t the only companies dealing with the risk presented by implementing gen AI models. When applications rely heavily on a model, there is a risk that a model trained on illegally obtained data can taint the entire product. ... It is clear that CEOs feel pressure to embrace gen AI tools to augment productivity across their organizations. However, many companies lack a sense of organizational readiness to implement them. Uncertainty abounds while regulations are hammered out, and the first cases prepare for litigation.


Cars are a ‘privacy nightmare on wheels’

Apart from data entered directly into a car’s “infotainment” system, many cars can collect data in the background via cameras, microphones, sensors and connected phones and apps. A lot of these data are used, at least in part, for legitimate purposes such as making driving more enjoyable and safer for the driver, passengers and pedestrians. But they can also be supplemented with data collected from other sources and used for other purposes. For instance, data may be collected from your website visit, your test drive at a dealership, or from third parties including “marketing agencies” and “providers of data-collecting devices, products or systems that you use”. ... It’s safe to say car manufacturers generally don’t want privacy laws tightened. The Federal Chamber of Automotive Industries (FCAI) represents companies distributing 68 brands of various types of vehicles in Australia. During the recent review of our privacy legislation, the FCAI made a submission to the Attorney General’s department arguing against many of the privacy law reforms under consideration.


The Impact of AI and Machine Learning in HR: Enhancing Recruitment and Employee Engagement

Amid a new digital landscape, rising employee expectations, and evolving business dynamics, HR professionals contend with a slew of challenges. Adapting HR processes and systems to digital transformation, especially in organisations with legacy systems, can prove demanding. HR leaders today grapple with tasks ranging from keeping pace with talent acquisition in the digital world to adopting rapidly evolving HR technology, such as HRIS, AI tools, and data analytics, to boost employee engagement. They also have to adapt their recruitment strategies to find the right talent in a competitive job market. While navigating these challenges, leaders should also stay alert to the opportunities on the horizon. These encompass enhancing the overall employee journey, embracing a variety of learning and growth initiatives, and streamlining decision-making through AI to enhance results while safeguarding efficiency. Furthermore, AI can analyze large amounts of data quickly, empowering decision-makers with useful insights to help them make informed decisions. This data-driven decision-making can result in better resource allocation, better strategy, and increased work satisfaction.


Microsoft to create team dedicated to data center automation and robotics

The move comes a month after a Microsoft Azure outage in Australia was partially blamed on poor software automation. A utility power sag tripped cooling units, shutting them down and causing temperatures to rise. With insufficient staff on site to reboot the units, the automated system shut down servers to protect them from overheating. Instead, the system could have been designed to reboot the cooling units. “We are exploring ways to improve existing automation to be more resilient to various voltage sag event types,” Microsoft said in a post-mortem. Robots and data centers have a long history, with numerous companies and research groups trying to build computers that could look after fellow computers. ... "As far as robotics, our hyperscale data centers are more like warehouses and most of the processes require a robot to navigate to a specific location to perform a task," Google's VP of data centers Joe Kava told DCD in 2021. “However, even as advanced as robotics have become, many of the tests in data centers are much more complicated than in other industries that have employed large-scale robotic implementations."
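The design alternative described in the post-mortem (attempt to restart cooling before sacrificing compute) can be sketched as simple automation logic. The data shapes, flags, and temperature threshold below are hypothetical, not Microsoft's actual system:

```python
# Hypothetical sketch of more resilient automation: after a voltage sag,
# first attempt to restart tripped cooling units, and only shut servers
# down if cooling cannot be restored and temperatures become unsafe.
def handle_power_sag(cooling_units, temp_c, max_safe_temp_c=32.0):
    """Return the action taken. Each unit is a dict with a 'running'
    flag and an optional 'restartable' flag (illustrative shapes)."""
    for unit in cooling_units:
        if not unit["running"] and unit.get("restartable", True):
            unit["running"] = True  # automated reboot of the cooling unit
    if all(unit["running"] for unit in cooling_units):
        return "cooling_restarted"
    # Last resort: protect hardware from overheating.
    return "servers_shutdown" if temp_c > max_safe_temp_c else "monitor"
```

The point of the sketch is the ordering of fallbacks: restart is attempted automatically, and shutdown remains a guarded last resort rather than the first response.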


What does the DPDP Act mean for philanthropy in India?

Given the stringent requirements of the DPDP Act, there’s a pressing need for revisiting and potentially revising the CSR guidelines. Striking a balance between accountability and privacy becomes crucial in ensuring compliance with both CSR and data protection mandates. While accountability remains paramount, it’s time to transition from rigid metrics to narratives of change. By fostering relationships built on mutual respect and shared learning, practices followed by donor organisations can resonate with the ethos of the DPDP Act and nurture a more collaborative philanthropic ecosystem. This necessitates a fundamental rethinking of how social impact can be measured, and shifting the focus from data collection to storytelling and community empowerment. By upholding privacy and agency, as per Sections 6 and 12, the law provides an opening to develop more participatory and human-centred evaluation frameworks. Funders are pivotal in enabling this evolution by modifying expectations, building capacity, and championing new trust-based and collaborative models of assessing progress.


LLMs Demand Observability-Driven Development

With good observability data, you can use that same data to feed back into your evaluation system and iterate on it in production. The first step is to use this data to evaluate the representativity of your production data set, which you can derive from the quantity and diversity of use cases. You can make a surprising number of improvements to an LLM-based product without even touching any prompt engineering, simply by examining user interactions, scoring the quality of the responses, and acting on the correctable errors (mainly data model mismatches and parsing/validation checks). You can fix or handle these manually in the code, which also gives you a set of test cases verifying that your corrections actually work. These tests will not verify that a particular input always yields a correct final output, but they will verify that a correctable LLM output can indeed be corrected. You can go a long way in the realm of pure software without reaching for prompt engineering. But ultimately, the only way to improve LLM-based software is by adjusting the prompt, scoring the quality of the responses, and readjusting accordingly.
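As one concrete instance of a correctable error: models sometimes wrap a JSON answer in a Markdown code fence, which a deterministic fix in code can handle, and that fix immediately becomes a test case. The schema and helper below are invented for illustration:

```python
import json

def parse_llm_json(raw: str) -> dict:
    """Validate an LLM response against a minimal schema, correcting a
    known recoverable error (output wrapped in a Markdown code fence)."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        # Correctable parsing error: strip the fence and language tag.
        cleaned = cleaned.strip("`").strip()
        if cleaned.startswith("json"):
            cleaned = cleaned[len("json"):]
    data = json.loads(cleaned)
    if "answer" not in data:
        # Data model mismatch: not correctable here, so surface it.
        raise ValueError("LLM output missing required 'answer' field")
    return data
```

A test built from this correction verifies exactly what the text says it should: not that every input yields a correct final output, but that a correctable output is indeed corrected.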


Feds Warn Healthcare Sector of 'NoEscape' RaaS Gang Threats

The developers of NoEscape ransomware are unknown, but they claim to have created their malware and associated infrastructure "entirely from scratch," HHS HC3 said. But security researchers have noted that the ransomware encryptors of NoEscape and Avaddon are nearly identical, with only one notable change in encryption algorithms, HHS HC3 wrote. "Previously, the Avaddon encryptor utilized AES for file encryption, with NoEscape switching to the Salsa20 algorithm. Otherwise, the encryptors are virtually identical, with the encryption logic and file formats almost identical, including a unique way of 'chunking of the RSA-encrypted blobs.'” While researchers have observed evidence suggesting that NoEscape is related to Avaddon, unlike Avaddon, it has yet to be determined whether there is a free NoEscape decryptor that organizations can use to recover encrypted files, HHS HC3 said. "Until then, unless certain detection and prevention methods are put in place, a successful exploitation by NoEscape ransomware will almost certainly result in the encryption and exfiltration of significant quantities of data."


5 Steps For Building Your Enterprise Semantic Recommendation Engine

After creating the supporting data models, the next step in building a semantic recommendation engine is to construct the graph. The graph acts as a database of nodes and connections between nodes (called edges) that houses all of the content relationships defined in the ontology model. Building the graph involves both ingesting and enriching source data. Ingestion maps raw data to nodes and edges in the graph. Enrichment appends additional attributes, tags, and metadata to enhance the data. This enriched data is then transformed into semantic triples, which are subject-predicate-object structures that capture relationships. In our example, the healthcare provider could transform their enriched data into triples that capture the relationships between diagnoses and medical subjects, and between medical subjects and content. Converting data into a web of semantic triples and loading it into the graph enables efficient querying. The knowledge graph’s flexibility also enables continuous integration of new data to keep recommendations relevant. 
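The ingestion step can be sketched with a toy in-memory triple store and the healthcare example above; the predicate and node names are hypothetical, and a production system would use a real graph database:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy triple store: nodes connected by subject-predicate-object edges."""
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(list)

    def ingest(self, subject, predicate, obj):
        # Ingestion maps an enriched record to a triple; duplicates are skipped.
        triple = (subject, predicate, obj)
        if triple not in self.triples:
            self.triples.add(triple)
            self.by_subject[subject].append((predicate, obj))

    def related(self, subject):
        # Efficient querying: all edges leaving a node.
        return self.by_subject[subject]

graph = KnowledgeGraph()
# Diagnosis -> medical subject, then medical subject -> content.
graph.ingest("type2-diabetes", "relatedToSubject", "endocrinology")
graph.ingest("endocrinology", "hasContent", "insulin-management-guide")
```

A recommendation then becomes a two-hop query: from a diagnosis to its medical subjects, and from those subjects to content.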


Agile Architecture: A Comparison of TOGAF and SAFe Framework for Agile Enterprise Architecture

In TOGAF, the Enterprise Architect plays a vital role in creating, maintaining, and evolving the enterprise architecture of an organization. They are responsible for aligning business and IT strategies, processes, and systems, and ensuring that the architecture supports the organization’s goals and objectives. The Enterprise Architect in TOGAF follows the Architecture Development Method (ADM), a structured approach that guides the creation and implementation of enterprise architecture. ... In SAFe, the Enterprise Architect has a slightly different focus and responsibilities. While the core principles of enterprise architecture remain the same, the Enterprise Architect in SAFe works within the context of Agile development practices and the broader framework of SAFe. They collaborate closely with Agile teams as well as other architect roles and play a crucial role in providing technical leadership, guidance, and support. The Enterprise Architect in SAFe helps teams align their technical solutions with the overall enterprise architecture, ensuring that the architectural vision is realized, and technical debt is managed effectively.


Cyber Insecurity, AI and the Rise of the CISO

Adding to cyber insecurity is unease about the use of artificial intelligence, not only by public employees but by cyber criminals too. It comes as no surprise that artificial intelligence (AI) is being used by cyber criminals to further exploit cyber weaknesses and vulnerabilities. In PTI’s City and County AI Survey, AI was listed as the No. 1 application to help thwart cyberattacks. Respondents recognize how AI can actively scan for suspicious patterns and anomalies as well as assist in remediation and recovery strategies. What’s more, AI systems continue to learn and act. Also new this year is the renewed focus on zero trust frameworks and strategies. Zero trust has never been more critical, and unfortunately it takes both time and talent to fully comprehend all its dependencies on the way to deployment. This year also marked the first time in years that the National Institute of Standards and Technology (NIST) has modified its Cybersecurity Framework, adding an underlying layer of governance to each of its traditional five pillars. 



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - October 14, 2023

What is tokenization?

Tokenization is the process of issuing a digital representation of an asset on a (typically private) blockchain. These assets can include physical assets like real estate or art, financial assets like equities or bonds, nontangible assets like intellectual property, or even identity and data. Tokenization can create several types of tokens. Stablecoins, a type of cryptocurrency pegged to real-world money and designed to be fungible, or interchangeable, are one example. Another type of token is an NFT—a nonfungible token, meaning a unique token that can’t be exchanged one-for-one with another—which is a digital proof of ownership people can buy and sell. Tokenization is potentially a big deal. Industry experts have forecast up to $5 trillion in tokenized digital-securities trade volume by 2030. There’s been hype around digital-asset tokenization for years, since its introduction back in 2017. But despite the big predictions, it hasn’t yet caught on in a meaningful way. We are seeing slow movement: US-based fintech infrastructure firm Broadridge now facilitates more than $1 trillion monthly on its distributed ledger platform.
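The fungible/nonfungible distinction can be made concrete with a toy sketch. Real tokens are implemented as contracts on a ledger, so everything below, including the class and method names, is purely illustrative:

```python
class FungibleToken:
    """Fungible: only balances matter; any unit is interchangeable."""
    def __init__(self):
        self.balances = {}

    def mint(self, owner, amount):
        self.balances[owner] = self.balances.get(owner, 0) + amount

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


class NonFungibleToken:
    """Nonfungible: each token ID is unique, with exactly one owner."""
    def __init__(self):
        self.owner_of = {}

    def mint(self, owner, token_id):
        if token_id in self.owner_of:
            raise ValueError("token ID already exists")
        self.owner_of[token_id] = owner

    def transfer(self, src, dst, token_id):
        if self.owner_of.get(token_id) != src:
            raise ValueError("not the owner")
        self.owner_of[token_id] = dst
```

The asymmetry is the whole point: a fungible transfer moves an amount, while a nonfungible transfer moves a specific, irreplaceable ID.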


MVP or TVP? Why Your Internal Developer Platform Needs Both

“TVP is about ‘thinness’ to try and avoid a massive platform. TVP is something that remains throughout an organizational evolution — it should always be the thinnest viable — whereas MVP is normally the first stage of something larger.” This shift toward investment in long-term thinness is extremely important. Gregor Hohpe calls this a “sinking platform” in his 2022 PlatformCon talk “The Magic of Platforms.” ... You can leave your platform the same because you invested all this kind of money, and we call this a sinking platform as the water level rises, right; it might be justified from investment, but you are kind of duplicating things that are now available in the base platform.” Hohpe goes on to describe how platform teams need to intentionally decide on their philosophy when it comes to supporting their platform: “Or you build a ‘floating platform’ where, when the base platform gains the capabilities you have built, you say ‘Oh, perfect! I don’t need my part anymore. I can let the base platform handle that, and I can innovate further on top. I build new things.'”


7 Blockchain Technology Mistakes You Should Watch Out For

The application of blockchain for secure information exchange and record storage leads to many misconceptions. CIOs often confuse database management systems (DBMS) with blockchain. Existing blockchain platforms cannot support complex data models and do not guarantee high throughput or low latency. They were built to provide an immutable, authoritative, and trusted record of events among a dynamic assortment of unrelated stakeholders. ... A smart contract is code that automatically executes legally relevant events and actions that are part of an agreement. The main utility of smart contracts is to reduce the need for trusted intermediaries, prevent fraud, and reduce arbitration costs. They are commonly associated with cryptocurrencies like Bitcoin and are fundamental building blocks of Decentralized Finance (DeFi) applications. However, at present, smart contracts are not necessarily agreements recognized by law, with some countries being exceptions.
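To make the idea concrete, here is a toy, off-chain sketch of smart-contract logic: the terms of an agreement encoded as code that settles automatically, with no intermediary to arbitrate. The escrow scenario and all names are invented for illustration; real smart contracts run on a blockchain in a language like Solidity:

```python
class EscrowContract:
    """Pay the seller once delivery is confirmed; otherwise refund the
    buyer after the deadline. Settlement happens automatically in code."""
    def __init__(self, buyer, seller, amount, deadline):
        self.buyer, self.seller = buyer, seller
        self.amount, self.deadline = amount, deadline
        self.delivered = False
        self.settled = None

    def confirm_delivery(self):
        self.delivered = True

    def settle(self, now):
        # The agreement's terms, executed without an arbiter; once
        # settled, the outcome is final (mimicking ledger immutability).
        if self.settled is None:
            if self.delivered:
                self.settled = ("pay", self.seller, self.amount)
            elif now > self.deadline:
                self.settled = ("refund", self.buyer, self.amount)
        return self.settled
```

Because the outcome follows mechanically from the recorded state, neither party needs a trusted third party to decide who gets the funds.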


Practicing Good Green Governance Leads to Profits

Let’s begin by defining green governance. It refers to a set of principles and practices aimed at promoting environmental sustainability and responsible management of natural resources within a clear governance and decision-making framework. A green-minded corporation should integrate environmental considerations into policies, regulations, and actions throughout all divisions of its business. Green governance aims to balance economic and environmental practices to create a profitable and sustainable future. ... Practicing green governance requires a holistic approach that considers the interconnectedness of environmental, operational, and economic systems to balance human needs and the health of the planet with the company’s bottom line and valuation. That balance is what helps ensure a sustainable and prosperous future for all stakeholders. ... Many companies want to showcase their greenness in a credible and trustworthy way but find the current system of backward-looking, voluntary standards and the myriad of ESG metrics to be daunting, arduous, and costly.


The Future is Now: IoT and the Evolution of Business Computing

The proliferation of IoT devices and sensors is generating massive amounts of data that provides invaluable insights for business decision-making. However, organizations need talent to properly analyze and derive meaning from these huge IoT datasets. A business management and accounting online degree is valuable in helping to develop the analytics skills needed to fully capitalize on IoT capabilities. These programs prepare the next generation of data-driven business leaders who will drive transformative change through IoT adoption. With access to real-time data from across the enterprise, managers can gain unprecedented visibility into operations. Marketers can analyze IoT data to understand customer behavior patterns and rapidly adjust campaigns. Supply chain personnel can identify and resolve bottlenecks as they occur. Executives can track core business metrics in real time to guide strategic decisions. The sheer volume of IoT data brings a paradigm shift in business computing where decisions are proactive, not reactive.


Psychological safety at the workplace

People show up at work with different states of mental well-being. So, empathy is absolutely non-negotiable. A meaningful way to be empathetic is to be mindful of our language and its impact on the other person. For instance, instead of the confrontational approach where one might say, “Your code is quite bad and not what I expected” say, “I know that you are capable of writing great code. Let’s figure out what happened this time.” This manner of checking in with each other on their state of mind and creating a space for team members to discuss their mental health without fear of judgment is a move in the right direction. ... Welcome different perspectives, and when people offer them, disagree with respect. People tend to cushion their ideas when they fear judgment. For instance, they might say, “this is probably a silly idea,” or “this may be a dumb question.” Reassure them that all ideas are welcome. Watch out for groupthink — the tendency of the minority to stay silent in order not to upset the majority. Invite opinions from everyone. 


The future of augmented reality is AI

Whenever we in the tech media or tech industry think or talk about AR, we tend to focus on what kind of holographic imagery we might see superimposed on the real world through our AR glasses. We imagine hands-free Pokémon Go, or radically better versions of Google Glass. But since the generative AI/LLM-based chatbot revolution struck late last year, it has become increasingly clear that of all the pieces that make up an AR experience, holographic digital virtual objects are the least important. The glasses are necessary. Android phones and iPhones have had “augmented reality” capabilities for years, and nobody cares because looking at your phone doesn’t compare to just seeing the world hands-free through glasses. The cameras and other sensors are necessary. It’s impossible to augment reality if your device has no way to perceive reality. The AI is necessary. We need AI to interpret and make sense of arbitrary people, objects, and activity in our fields of view.


How to maintain a harmonious workplace atmosphere in multigenerational firms

Ensuring the well-being of a multigenerational workforce is crucial for any organisation. HR can play a key role in this by implementing policies and programs that cater to the unique needs and preferences of different generations. For instance, offering flexible work arrangements, mentoring programs, and personalised professional development opportunities can help employees of all ages feel valued and supported. Additionally, providing access to resources and benefits that address specific health and wellness concerns can help ensure that employees stay healthy and productive throughout their careers. “By prioritising the well-being of all employees, regardless of age or background, organisations can create a more inclusive and supportive workplace environment that promotes work-life balance. Creating a diverse, equitable, and inclusive workplace is essential for fostering a positive and productive work environment.”


Oh No, the Software Consultants Are Coming!

Sadly, consultants are still used to back up a decision that has already been made by management. So a sudden presence of consultants is often viewed as positively as the arrival of sharks around a stalled boat. But in most cases, consultants are just hired to see why an area is not performing in some way. It is perfectly common for them to tell management that they are the problem. That might shorten the engagement, but you can do that sort of thing when you are not an employee. More realistically, consultants might need to explain to staff why systematic changes will improve the company’s prospects, which still leaves the unspoken threat about what happens if things don’t change. And yet, many developers do fall into ruts and moving on may truly be the best thing to do. And of course, escaping a death march project is not always the worst thing that can happen. By the way, if you are staff, always ask consultants for career advice. Not only is it free, but it won’t be biased by your background or colored by employer motives.


CBDC and stablecoins: Early coexistence on an uncertain road

It is too early to confidently forecast the trajectory and endgame for CBDCs and stablecoins, given the multitude of unresolved design factors still in play. For instance, will central banks focus first on retail or wholesale use cases, and emphasize domestic or cross-border applications? And how rapidly will national agencies pursue regulation of stablecoins prior to issuing their own CBDCs? To begin to understand some of the potential scenarios, we need to appreciate the variety and applications of CBDCs and stablecoins. There is no single CBDC issuance model, but rather a continuum of approaches being piloted in various countries. ... At the opposite end of the spectrum, China’s CBDC pilot relies on private-sector banks to distribute and maintain eCNY (digital yuan) accounts for their customers. The ECB approach under consideration involves licensed financial institutions each operating a permissioned node of the blockchain network as a conduit for distribution of a digital euro.



Quote for the day:

"Anything is possible when you have the right people there to support you." -- Misty Copeland