Daily Tech Digest - September 26, 2023

How to Future-Proof Your IT Organization

Effective future-proofing begins with strong leadership support and investments in essential technologies, such as the cloud and artificial intelligence (AI). Leaders should encourage an agile mindset across all business segments to improve processes and embrace potentially useful new technologies, says Bess Healy ... Important technology advancements frequently emerge from various expert ecosystems, utilizing the knowledge possessed by academic, entrepreneurial, and business startup organizations, Velasquez observes. “Successful IT leaders encourage team members to operate as active participants in these ecosystems, helping reveal where the business value really is while learning how new technology could play a role in their enterprises.” It’s important to educate both yourself and your teams on how technologies are evolving, says Chip Kleinheksel, a principal at business consultancy Deloitte. “Educating your organization about transformational changes while simultaneously upskilling for AI and other relevant technical skillsets will arm team members with the correct resources and knowledge ahead of inevitable change.”


How one CSO secured his environment from generative AI risks

"We always try to stay ahead of things at Navan; it’s just the nature of our business. When the company decided to adopt this technology, as a security team we had to do a holistic risk assessment.... So I sat down with my leadership team to do that. The way my leadership team is structured is, I have a leader who runs product platform security, which is on the engineering side; then we have SecOps, which is a combination of enterprise security, DLP – detection and response; then there’s a governance, risk and compliance and trust function, and that’s responsible for risk management, compliance and all of that. "So, we sat down and did a risk assessment for every avenue of the application of this technology. ... "The way we do DLP here is it’s based on context. We don’t do blanket blocking. We always catch things and we run in it like an incident. It could be insider risk or external, then we involve legal and HR counterparts. This is part and parcel with running a security team. We’re here to identify threats and build protections against them."


Governor at Fed Cautiously Optimistic About Generative AI

The adverse impact of AI on jobs will only be borne by a small set of people, in contrast to the many workers throughout the economy who will benefit from it, she said. "When the world switched from horse-drawn transport to motor vehicles, jobs for stable hands disappeared, but jobs for auto mechanics took their place." And it goes beyond just creating and eliminating positions. Economists encourage a perception of work in terms of tasks, not jobs, Cook said. This will require humans to obtain skills to adapt themselves to the new world. "As firms rethink their product lines and how they produce their goods and services in response to technical change, the composition of the tasks that need to be performed changes. Here, the portfolio of skills that workers have to offer is crucial." AI's benefits to society will depend on how workers adapt their skills to the changing requirements, how well their companies retrain or redeploy them, and how policymakers support those that are hardest hit by these changes, she said.


6 IT rules worth breaking — and how to get away with it

Automation, particularly when incorporating artificial intelligence, presents many benefits, including enhanced productivity, efficiency, and cost savings. It should be, and usually is, a top IT priority. That is, unless an organization is dealing with a complex or novel task that requires a nuanced human touch, says Hamza Farooq, a startup founder and an adjunct professor at UCLA and Stanford. Breaking a blanket commitment to automation prioritization can be justified when tasks involve creative problem-solving, ethical considerations, or situations in which AI’s understanding of a particular activity or process may be limited. “For instance, handling delicate customer complaints that demand empathy and emotional intelligence might be better suited for human interaction,” Farooq says. While sidelining automation may, in some situations, lead to more ethical outcomes and improved customer satisfaction, there’s also a risk of hampering a key organizational process. “Overreliance on manual intervention could impact scalability and efficiency in routine tasks,” Farooq warns, noting that it’s important to establish clear guidelines for identifying cases in which an automation process should be bypassed.


Introduction to Azure Infrastructure as Code

One of the core benefits of IaC is that it allows you to check infrastructure code files into source control, just like you would with software code. This means that you can version and manage your infrastructure code just like any other codebase, which is important for ensuring consistency and enabling collaboration among team members. In early project work, IaC allows for quick iteration on potential configuration options through automated deployments instead of a manual "hunt and peck" approach. Templates can be parameterized to reuse code assets, making it easy to deploy repeatable environments such as dev, test and production. During the lifecycle of a system, IaC serves as an effective change-control mechanism. All changes to the infrastructure are first reflected in the code, which is then checked in as files in source control. The changes are then applied to each environment based on current CI/CD processes and pipelines, ensuring consistency and reducing the risk of human error.
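As an illustration of the parameterized, repeatable deployments described above, here is a minimal Python sketch that drives the Azure CLI with a different parameter set per environment. The template file name, parameter names, and environment values are assumptions for illustration, not part of any specific project.

    import subprocess

    # Hypothetical parameter sets for repeatable environments (names and values are assumptions).
    ENVIRONMENTS = {
        "dev":  {"environmentName": "dev",  "appServiceSku": "B1"},
        "test": {"environmentName": "test", "appServiceSku": "B2"},
        "prod": {"environmentName": "prod", "appServiceSku": "P1v3"},
    }

    def deploy(environment: str, resource_group: str, template_file: str = "main.bicep") -> None:
        """Deploy the same template to any environment, varying only its parameter values."""
        cmd = [
            "az", "deployment", "group", "create",
            "--resource-group", resource_group,
            "--template-file", template_file,
        ]
        for key, value in ENVIRONMENTS[environment].items():
            cmd += ["--parameters", f"{key}={value}"]
        subprocess.run(cmd, check=True)  # fail loudly if the deployment is rejected

    if __name__ == "__main__":
        deploy("dev", "rg-myapp-dev")

Because the same template and script run for every environment, drift between dev, test, and production is caught in code review rather than discovered in production.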


National Cybersecurity Strategy: What Businesses Need to Know

Defending critical infrastructure, including systems and assets, is vital for national security, public safety, and economic prosperity. The NCS will set consistent cybersecurity standards for critical infrastructure—for example, mandatory penetration tests and formal vulnerability scans—and make it easier to report cybersecurity incidents and breaches. ... Once the national infrastructure is protected and secured, the NCS will go bullish in efforts to neutralize threat actors that can compromise the cyber economy. This effort will rely upon global cooperation and intelligence-sharing to deal with rampant cybersecurity campaigns and lend support to businesses by using national resources to tactically disrupt adversaries. ... As the world’s largest economy, the U.S. has sufficient resources to lead the charge in future-proofing cybersecurity and driving confidence and resilience in the software sector. The goal is to make it possible for private firms to trust the ecosystem, build innovative systems, ensure minimal damage, and provide stability to the market during catastrophic events.


Preparing for the post-quantum cryptography environment today

"Post-quantum cryptography is about proactively developing and building capabilities to secure critical information and systems from being compromised through the use of quantum computers," Rob Joyce, Director of NSA Cybersecurity, writes in the guide. "The transition to a secured quantum computing era is a long-term intensive community effort that will require extensive collaboration between government and industry. The key is to be on this journey today and not wait until the last minute." This perfectly aligns with Baloo's thinking that now is the time to engage, and not to wait until it becomes an urgent situation. The guide notes how the first set of post-quantum cryptographic (PQC) standards will be released in early 2024 "to protect against future, potentially adversarial, cryptanalytically-relevant quantum computer (CRQC) capabilities. A CRQC would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used to protect information systems today."


Future of payments technology

Embedded finance requires technology to build into products and services the capability to move money in certain circumstances, such as paying a toll on a motorway. The idea is to embed finance into the consumer journey so that consumers don’t have to actively pay; instead, payment happens based on a contract or agreement made in advance. Consumers pay without consciously having to dig out their debit card. One example is Uber, where we widely use the service without having to make an actual payment upfront. This is sometimes referred to as “contextual payments” – where the context of the situation allows for payment to be frictionlessly executed. ... Artificial intelligence is already being used in payments to improve the customer journey and also how products are delivered. So far, this has been machine learning. Generative AI, where the AI itself is able to make decisions, will be the next generational jump and have a huge impact on payments, especially when it comes to protection against fraud. The problem is that artificial intelligence could be a positive or a negative, depending on who gets to exploit it first, for good or ill.


Hiring revolutionised: Tackling skill demands with agile recruitment

Tech-enabled smart assessment frameworks not only provide scalability and objectivity in talent assessment but also help build a perception of fairness amongst candidates and internal stakeholders. L&T uses virtual assessments at the entry level, and Venkat believes in its tremendous scope for mid-level and leadership assessments too. Apurva shared that when infusing technology, many companies make the mistake of merely making things fancy without actually creating a winning EVP. The key to tech success is balancing personalised training with broader skill requirements. HR must develop a very good funnel by inculcating thought leadership around the quality of employees and must also focus on how these prospective employees absorb the culture of the organisation. This is a huge change exercise that entails identifying the skill gap, restructuring the job responsibilities, mapping specific roles with specific skills, assessing a person’s personality traits, and offering a very personalised onboarding so that people are productive from day one.


Designing Databases for Distributed Systems

As the name suggests, this pattern proposes that each microservice manages its own data. This implies that no other microservice can directly access or manipulate the data managed by another microservice. Any exchange or manipulation of data can be done only by using a set of well-defined APIs. At face value, this pattern seems quite simple. It can be implemented relatively easily when we are starting with a brand-new application. However, when we are migrating an existing monolithic application to a microservices architecture, the demarcation between services is not so clear. ... In the command query responsibility segregation (CQRS) pattern, an application listens to domain events from other microservices and updates a separate database for supporting views and queries. We can then serve complex aggregation queries from this separate database while optimizing the performance and scaling it up as needed.
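To make the CQRS read side concrete, here is a minimal Python sketch (not from the source article) in which a hypothetical read model consumes domain events from another service and keeps a denormalized aggregate ready for queries. The event and field names are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class OrderPlaced:
        """Hypothetical domain event published by the service that owns order data."""
        order_id: str
        customer_id: str
        amount: float

    class CustomerSpendView:
        """Read model: a separate, query-optimized store updated from domain events."""

        def __init__(self) -> None:
            self._totals: dict[str, float] = {}  # stand-in for a dedicated query database

        def apply(self, event: OrderPlaced) -> None:
            # Update the denormalized view whenever a domain event arrives.
            self._totals[event.customer_id] = self._totals.get(event.customer_id, 0.0) + event.amount

        def total_spend(self, customer_id: str) -> float:
            # Aggregation queries are served from the read model, not the owning service's store.
            return self._totals.get(customer_id, 0.0)

    view = CustomerSpendView()
    view.apply(OrderPlaced("o-1", "c-42", 120.0))
    view.apply(OrderPlaced("o-2", "c-42", 80.0))
    print(view.total_spend("c-42"))  # 200.0

In a real system the read store would be its own database that can be scaled and indexed independently of the services that own the source data.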



Quote for the day:

"Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through." -- Jarod Kint z

Daily Tech Digest - September 25, 2023

Computer vision's next breakthrough

Beyond quality and efficiency, computer vision can help improve worker safety and reduce accidents on the factory floor and other job sites. According to the US Bureau of Labor Statistics, there were nearly 400,000 injuries and illnesses in the manufacturing sector in 2021. “Computer vision enhances worker safety and security in connected facilities by continuously identifying potential risks and threats to employees faster and more efficiently than via human oversight,” says Yashar Behzadi, CEO and founder of Synthesis AI. “For computer vision to achieve this accurately and reliably, the machine learning models are trained on massive amounts of data, and in these particular use cases, the unstructured data often comes to the ML engineer raw and unlabeled.” Using synthetic data is also important for safety-related use cases, as manufacturers are less likely to have images highlighting the underlying safety factors. “Technologies like synthetic data alleviate the strain on ML engineers by providing accurately labeled, high-quality data that can account for edge cases that save time, money, and the headache inaccurate data causes,” adds Behzadi.


Five years on: the legacy of GDPR

Five years on, “the European regulation has inspired data protection around the world and many countries have put privacy standards in place. These include countries in South America such as Argentina, Brazil, and Chile, and in Asia, such as Japan and South Korea. In Australia, the Privacy Act has been in place since 1988, but was recently amended to mirror GDPR concepts. GDPR has also had a strong influence in the US where several states introduced data protection legislation, including California with the California Consumer Privacy Act, and Colorado with the Colorado Consumer Protection Act. On a federal level, the draft American Data Privacy and Protection Act is another example of where regulation is heading.” So what impact has it had on how organisations are run and data is handled? Aditya Fotedar, CIO at Tintri, a provider of auto adaptive, workload intelligent platforms, explains that while GDPR has ushered in significant changes, they are built upon existing regulations: “GDPR was a progression on the existing EU privacy laws, main changes being the sub processor contractual clauses, right to forget, and size of the fines. 


Embracing Privacy by Design as a Corporate Responsibility

Companies are increasingly realizing the immense importance of a paradigm shift towards Privacy by Design. This is because this approach significantly reduces the cost of adapting to new legislation, builds consumer trust, and carries fewer risks. Data protection is here to stay, and this is a realization that everyone – from companies to legislators to consumers – is becoming more and more aware of and acting upon. The important thing now is to approach data protection more proactively – and to make it a general corporate responsibility. Data protection rights are also human rights! So far, the advertising industry has viewed data protection as a drag, but this perception will have to change as we move through 2023. After all, data protection is no longer a limitation, but a selling point. As a result, industry players are beginning to view it as a worthwhile investment rather than a cost. Companies are doing this proactively because they want to stay competitive and keep their brand privacy-centric, and to ensure that customers continue to trust them.


4 reasons cloud data repatriation is happening in storage

Moving storage to another location means disconnecting on-site storage resources, such as SANs, NAS devices, RAID equipment, optical storage and other technologies. But how likely is it that an IT department making a push to cloud storage clears out the storage section of its data center and makes constructive use of the newly empty space? Not always likely, and the organization is still paying for every square foot of floor space in that data center. Assuming IT managers performed a careful, phased migration from on site to the cloud, they probably would have analyzed the use of space made available from the migration. If the company owns the displaced storage assets, managers must consider what happens to them after a department or application moves out of the data center. From a business perspective, it may make sense to retain these assets and have them ready for use in an emergency. This approach also ensures that storage resources are available if cloud data repatriation occurs, but it doesn't save space -- or money. Continual advances in computing power can mean that repatriation may not require as much physical space for the same or greater processing speeds and storage capacity.


10 digital transformation questions every CIO must answer

Am I engaging people on the front lines to formulate DX plans? According to Rogers, the answer should be yes. “You need people on the front lines, because it is the business units who have people out there talking to customers every day,” he says, adding that while C-suite support for transformation is crucial, the front-line perspectives offered by lower-tier employees are those that can identify where change is needed and can truly impact the business. ... Am I identifying and using the right business metrics to measure progress? Most CIOs have moved beyond using traditional IT metrics like uptime and application availability to determine whether a tech-driven initiative is successful. Still, there’s no guarantee that CIOs use the most appropriate metrics for measuring progress on a DX program, says Venu Lambu, CEO of Randstad Digital, a digital enablement partner. “It’s important to have the technology KPIs linked to business outcomes,” he explains. If your business wants to have faster time to market, improved customer engagement, or increased customer retention, those are what CIOs should measure to determine success.


Unlocking the Value of Cloud Services in the AI/ML Era

As cloud complexity and maturity grow, the goal for businesses should be more than just “lift and shift” scenarios, especially when such migrations can result in higher costs. The key is understanding how to unlock the real value of cloud services to meet specific organizational needs. For example, with a clear view of how a vendor’s PaaS and SaaS strengths map to business objectives, organizations can release new features, cut costs, and gain powerful new capabilities to support long-term outcomes using predefined ML models. Success demands that systems be continually evaluated to seek out iterative improvements, rather than being treated as a one-off implementation. After all, technology is constantly evolving, so there’s no room to be complacent or ignore the environment in which infrastructure operates. This is where human insight and expertise play a crucial role. For example, consider the matter of determining the right public or private cloud vendors for the business. Companies operating in highly regulated regions will need to consider how a cloud vendor can ensure data is compliant with localized regulations.


Insights from launching a developer-led bank

Traditional banks tend to treat policies as their primary tool for problem-solving. While policies are part of the source code that defines how a business operates, they do not define culture. An organisation’s real culture is found in the values and behaviours of the people who work there - how they interact, how they work towards their goals, and how they handle challenges. Culture is defined by who a company chooses to hire, fire, and promote. ... Unfortunately, traditional banks don't place much emphasis on core values and culture during hiring, preferring to focus solely on qualifications and experience. This is why many banks end up with a culture that is at odds with the one they claim to have - which is both misleading to the outside world and a source of strain and cognitive dissonance internally. Your focus should be on building a culture that goes beyond policy documents. You need a holistic recruitment strategy that assesses the candidate’s core values—how they work with others, their perception of accountability, and whether they display kindness and thoughtfulness. 


How global enterprises navigate the complex world of data privacy

Some of the strategies for balancing the need for personalized data analytics against ethical and legal data privacy responsibilities include: Data minimization: As per the previous response, avoid collecting excessive data that could pose a privacy risk and only collect and use that which is specific to the business objective. Transparency: Be transparent in your policies about what is collected, how it’s collected and how it will be used. Ensure explicit consent from your end users. Strong data governance: Ensure strong oversight not only in areas such as data security, but also in privacy by design, customer education, and audits and reviews, to enable the data privacy posture to constantly evolve. The balance between customer analytics and privacy is a delicate one that requires an ongoing commitment to fostering a culture of privacy and respect for data and end users within your organization. ... As AI and machine learning technologies continue to evolve, the challenges include ethical considerations, bias and legal compliance to name a few, but the opportunities are also significant.
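As a concrete illustration of the data minimization strategy above, the following Python sketch keeps only an allowlisted set of fields before data is stored or analyzed. The field names are hypothetical.

    # Keep only the fields that serve the stated business objective (field names are hypothetical).
    ALLOWED_FIELDS = {"customer_id", "country", "plan", "last_login"}

    def minimize(record: dict) -> dict:
        """Drop anything not explicitly needed before it is stored or analyzed."""
        return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

    raw = {"customer_id": "c-1", "country": "DE", "plan": "pro",
           "last_login": "2023-09-20", "date_of_birth": "1990-01-01", "device_id": "abc"}
    print(minimize(raw))  # excessive fields such as date_of_birth never reach downstream systems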


Unmasking the MGM Resorts Cyber Attack: Why Identity-Based Authentication is the Future

As seen from the MGM cyber attack, relying on single-factor authentication is a glaring example of outdated security. This method must be revised today when cyber threats are increasingly sophisticated. Although a step in the right direction, multi-factor authentication can fall short if not implemented correctly. For instance, using easily accessible information as a second factor, like a text message sent to a phone, can be intercepted and exploited. The evolution of security measures has brought us from simple passwords to biometrics and beyond. Yet, many businesses are stuck in the past, relying on these half-measures. It’s not just about keeping up with the times; it’s about safeguarding your organization’s future. One-size-fits-all solutions are ineffective, and risk-based authentication should be the norm, not the exception. ... Security half-measures, like using codes, devices, or unverified biometrics as identity proxies, are more than just weak points; they open doors for cybercriminals. The MGM breach is a stark reminder of the dangers of compromised security. 
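To illustrate what risk-based authentication might look like in practice, here is a simplified Python sketch that scores sign-in risk signals and adapts the required response. The signals, weights, and thresholds are illustrative assumptions, not a prescribed policy.

    # Illustrative risk signals, weights, and thresholds; a real deployment would calibrate these.
    def risk_score(signin: dict) -> int:
        score = 0
        if signin.get("new_device"):
            score += 30
        if signin.get("unusual_location"):
            score += 30
        if signin.get("impossible_travel"):
            score += 40
        return score

    def decide(signin: dict) -> str:
        """Adapt the authentication requirement to the risk instead of one-size-fits-all."""
        score = risk_score(signin)
        if score >= 60:
            return "deny_and_alert"          # too risky even with additional factors
        if score >= 30:
            return "strong_identity_check"   # e.g. a phishing-resistant factor, not an SMS code
        return "allow"

    print(decide({"new_device": True, "unusual_location": True}))  # deny_and_alert

The point of the sketch is the shape of the decision: low-risk sign-ins proceed with minimal friction, while risky ones are forced through verification that cannot be satisfied by an intercepted text message.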


Metrics-Driven Developer Productivity Engineering at Spotify

An engineering department could have an OKR on the lagging metric of MTTR and a platform team supporting SREs would have a leading metric of log ingestion speed. These would both be in support of the company-level OKR to increase customer satisfaction, which is measured by things like net promoter scores (NPS), active users and churn rate. This emphasizes one of the important goals of platform engineering which is to increase engineers’ sense of purpose by connecting their work more closely to delivering business value. “Productivity cannot be measured easily. And certainly not with a single accurate number. And probably not even with a few of them. So these metrics about SRE efficiency or developer productivity, they need to be contextualized for your own company, your tech stack, your team even,” he said, emphasizing that the trends are typically more important than the actual values. “That does not mean that we cannot have a productive conversation about them. But it does mean there is no absolute way to measure” developer productivity, knowing that proxy metrics will never capture everything.
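As an example of the kind of lagging metric mentioned above, the following Python sketch computes MTTR from a handful of hypothetical incident records; as the article notes, the trend over time matters more than the absolute number.

    from datetime import datetime, timedelta

    # Hypothetical incident records: (detected_at, restored_at).
    incidents = [
        (datetime(2023, 9, 1, 10, 0), datetime(2023, 9, 1, 11, 30)),
        (datetime(2023, 9, 3, 22, 15), datetime(2023, 9, 4, 1, 15)),
    ]

    def mttr(records) -> timedelta:
        """Lagging metric: mean time to restore across resolved incidents."""
        total = sum((restored - detected for detected, restored in records), timedelta())
        return total / len(records)

    print(mttr(incidents))  # 2:15:00 -- meaningful as a trend, not as an absolute productivity score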



Quote for the day:

"A good plan executed today is better than a perfect plan executed tomorrow." -- General George Patton

Daily Tech Digest - September 24, 2023

How legacy systems are threatening security in mergers & acquisitions

Legacy systems are far more likely to get hacked. This is especially true for companies that become involved in private equity transactions, such as mergers, acquisitions, and divestitures. These transactions often result in IT system changes and large movements of data and financial capital which leave organizations acutely vulnerable. With details of these transactions being publicized or publicly accessible, threat actors can specifically target companies likely to be involved in such deals. We have seen two primary trends throughout 2023: Threat groups are closely following news cycles, enabling them to quickly target entire portfolios with zero-day attacks designed to upend aging technologies; disrupting businesses and their supply chains; Corporate espionage cases are also on the rise as threat actors embrace longer dwell times and employ greater calculation in methods of monetizing attacks. Together, this means the number of strategically calculated attacks — which are more insidious than hasty smash-and-grabs — are on the rise. 


How Frontend Devs Can Take Technical Debt out of Code

To combat technical debt, developers — even frontend developers — must see their work as a part of a greater whole, rather than in isolation, Purighalla advised. “It is important for developers to think about what they are programming as a part of a larger system, rather than just that particular part,” he said. “There’s an engineering principle, ‘Excessive focus on perfection of art compromises the integrity of the whole.’” That means developers have to think like full-stack developers, even if they’re not actually full-stack developers. For the frontend, that specifically means understanding the data that underlies your site or web application, Purighalla explained. “The system starts with obviously the frontend, which end users touch and feel, and interface with the application through, and then that talks to maybe an orchestration layer of some sort, of APIs, which then talks to a backend infrastructure, which then talks to maybe a database,” he said. “That orchestration and the frontend has to be done very, very carefully.” Frontend developers should take responsibility for the data their applications rely on, he said.


Digital Innovation: Getting the Architecture Foundations Right

While the benefits of modernization are clear, companies don’t need to be cutting edge everywhere, but they do need to apply the appropriate architectural patterns to the appropriate business processes. For example, Amazon Prime recently moved away from a microservices-based architecture for streaming media. In considering the additional complexity of service-oriented architectures, the company decided that a "modular monolith" would deliver most of the benefits for much less cost. Companies that make a successful transition to modern enterprise architectures get a few things right. ... Enterprise technology architecture isn’t something that most business leaders have had to think about, but they can’t afford to ignore it any longer. Together with the leaders of the technology function, they need to ask whether they have the right architecture to help them succeed. Building a modern architecture requires ongoing experimentation and a commitment to investment over the long term.


GenAI isn’t just eating software, it’s dining on the future of work

As we step into this transformative era, the concept of “no-collar jobs” takes center stage. Paul introduced this idea in his book “Human + Machine,” where new roles are expected to emerge that don’t fit into the traditional white-collar or blue-collar jobs; instead, it’s giving rise to what he called ‘no-collar jobs.’ These roles defy conventional categories, relying increasingly on digital technologies, AI, and automation to enhance human capabilities. In this emergence of new roles, the only threat is to those “who don’t learn to use the new tools, approaches and technologies in their work.” While this new future involves a transformation of tasks and roles, it does not necessitate jobs disappearing. ... Just as AI has become an integral part of enterprise software today, GenAI will follow suit. In the coming year, we can expect to see established software companies integrating GenAI capabilities into their products. “It will become more common for companies to use generative AI capabilities like Microsoft Dynamics Copilot, Einstein GPT from Salesforce or, GenAI capabilities from ServiceNow or other capabilities that will become natural in how they do things.”


The components of a data mesh architecture

In a monolithic data management approach, technology drives ownership. A single data engineering team typically owns all the data storage, pipelines, testing, and analytics for multiple teams—such as Finance, Sales, etc. In a data mesh architecture, business function drives ownership. The data engineering team still owns a centralized data platform that offers services such as storage, ingestion, analytics, security, and governance. But teams such as Finance and Sales would each own their data and its full lifecycle (e.g. making code changes and maintaining code in production). Moving to a data mesh architecture brings numerous benefits: it removes roadblocks to innovation by creating a self-service model for teams to create new data products; it democratizes data while retaining centralized governance and security controls; and it decreases data project development cycles, saving money and time that can be driven back into the business. Because it’s evolved from previous approaches to data management, data mesh uses many of the same tools and systems that monolithic approaches use, yet exposes these tools in a self-service model combining agility, team ownership, and organizational oversight.
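One way to picture domain ownership on top of a shared platform is sketched below in Python: each domain registers its own data product with a central catalog, while the central platform only provides the registry and governance hooks. The structure and field names are illustrative assumptions, not a standard interface.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class DataProduct:
        """A domain-owned data product registered with the central platform (illustrative fields)."""
        name: str
        owner: str                 # the business domain, not the central data engineering team
        schema: dict               # the contract consumers can rely on
        read: Callable[[], list]   # the owning domain controls how the data is produced

    CATALOG: dict[str, DataProduct] = {}  # the self-service platform's shared registry

    def register(product: DataProduct) -> None:
        # Central governance (schema checks, access policies, etc.) would hook in here.
        CATALOG[product.name] = product

    # The Sales domain owns and publishes its product; Finance would register its own separately.
    register(DataProduct(
        name="sales.orders_daily",
        owner="sales",
        schema={"order_id": "str", "amount": "float", "day": "date"},
        read=lambda: [{"order_id": "o-1", "amount": 120.0, "day": "2023-09-25"}],
    ))

    print(CATALOG["sales.orders_daily"].read())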


Six major trends in data engineering

Some modern data warehouse solutions, including Snowflake, allow data providers to seamlessly share data with users by making it available as a feed. This does away with the need for pipelines, as live data is shared in real-time without having to move the data. In this scenario, providers do not have to create APIs or FTPs to share data and there is no need for consumers to create data pipelines to import it. This is especially useful for activities such as data monetisation or company mergers, as well as for sectors such as the supply chain. ... Organisations that use data lakes to store large sets of structured and semi-structured data are now tending to create traditional data warehouses on top of them, thus generating more value. Known as a data lakehouse, this single platform combines the benefits of data lakes and warehouses. It is able to store unstructured data while providing the functionality of a data warehouse, to create a strategic data storage/management system. In addition to providing a data structure optimised for reporting, the data lakehouse provides a governance and administration layer and captures specific domain-related business rules.


From legacy to leading: Embracing digital transformation for future-proof growth

Digital transformation without a clear vision and roadmap is identified as a big reason for failure. Several businesses may adopt change because of emerging trends and rapid innovation without evaluating their existing systems or business requirements. To avoid such failure, every tech leader must develop a clear vision, and comprehensive roadmap aligned with organizational goals, ensuring each step of the transformation contributes to the overarching vision. ... The rapid pace of technological change often outpaces the availability of skilled professionals. In the meantime, tech leaders may struggle to find individuals with the right expertise to drive the transformation forward. To address this, businesses should focus on strategic upskilling using IT value propositions and hiring business-minded technologists. Furthermore, investing in individual workforce development can bridge this gap effectively. ... Many organizations grapple with legacy systems and outdated infrastructure that may not seamlessly integrate with modern digital solutions. 


7 Software Testing Best Practices You Should Be Talking About

What sets the very best testers apart from the pack is that they never lose sight of why they’re conducting testing in the first place, and that means putting user interest first. These testers understand that testing best practices aren’t necessarily things to check off a list, but rather steps to take to help deliver a better end product to users. To become such a tester, you need to always consider software from the user’s perspective and take into account how the software needs to work in order to deliver on the promise of helping users do something better, faster and easier in their daily lives. ... In order to keep an eye on the bigger picture and test with the user experience in mind, you need to ask questions and lots of them. Testers have a reputation for asking questions, and it often comes across as them trying to prove something, but there’s actually an important reason why the best testers ask so many questions.


Why Data Mesh vs. Data Lake Is a Broader Conversation

Most businesses with large volumes of data use a data lake as their central repository to store and manage data from multiple sources. However, the growing volume and varied nature of data in data lakes makes data management challenging, particularly for businesses operating with various domains. This is where a data mesh approach can tie in to your data management efforts. The data mesh is a microservice, distributed approach to data management whereby extensive organizational data is split into smaller, multiple domains and managed by domain experts. The value provided by implementing a data mesh for your organization includes simpler management and faster access to your domain data. By building a data ecosystem that implements a data lake with data mesh thinking in mind, you can grant every domain operating within your business its product-specific data lake. This product-specific data lake helps provide cost-effective and scalable storage for housing your data and serving your needs. Additionally, with proper management by domain experts like data product owners and engineers, your business can serve independent but interoperable data products.


The Hidden Costs of Legacy Technology

Maintaining legacy tech can prove to be every bit as expensive as a digital upgrade. This is because IT staff have to spend time and money to keep the obsolete software functioning. This wastes valuable staff hours that could be channeled into improving products, services, or company systems. A report from Dell estimates that organizations currently allocate 60-80% of their IT budget to maintaining existing on-site hardware and legacy apps, which leaves only 20-40% of the budget for everything else. ...  No company can defer upgrading its tech indefinitely: sooner or later, the business will fail as its rivals outpace it. Despite this urgency, many business leaders mistakenly believe that they can afford to defer their tech improvements and rely on dated systems in the meantime. However, this is a misapprehension and can lead to ‘technical debt.’ ‘Technical debt' describes the phenomenon in which the use of legacy systems defers short-term costs in favor of long-term losses that are incurred when reworking the systems later on. 



Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas

Daily Tech Digest - September 23, 2023

A CISO’s First 90 Days: The Ultimate Action Plan and Advice

It’s a CISO’s responsibility to establish a solid security foundation as rapidly as possible, and there are many mistakes that can be made along the way. This is why the first 90 days are the most important for new CISOs. Without a clear pathway to success in the early months, CISOs can lose confidence in their ability as change agents and put their entire organization at risk of data theft and financial loss. No pressure! Here’s our recommended roadmap for CISOs in the first 90 days of a new role. ... This means they can reduce the feeling of overwhelm and work strategically toward business goals. For a new CISO, it can be challenging trying to locate and classify all the sensitive data across an organization, not to mention ensuring that it’s also safe from a variety of threats. Data protection technology is often focused on perimeters and endpoints, giving internal bad actors the perfect opportunity to slip through any security gaps in files, folders, and devices. For large organizations, it’s practically impossible to audit data activity at scale without a robust DSPM.


There’s No Value in Observability Bloat. Let’s Focus on the Essentials

Telemetry data gathered from the distributed components of modern cloud architectures needs to be centralized and correlated for engineers to gain a complete picture of their environments. Engineers need a solution with critical capabilities such as dashboarding, querying and alerting, and AI-based analysis and response, and they need the operation and management of the solution to be streamlined. What’s important for them to know is that it’s not necessary to spend more to ensure peak performance and visibility as their environmental complexity grows. ... No doubt, more data is being generated, but most of it is not relevant or valuable to an organization. Observability can be optimized to bring greater value to customers, and that’s where the market is headed. Call it “essential observability.” It’s a disruptive vision to propose a re-architected approach to observability, but what engineers need is a new approach making it easier to surface insights from their telemetry data while deprioritizing low-value data. Costs can be reduced by consuming only the data that enables teams to maintain performance and drive smart business decisions.
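A rough sketch of what this "essential observability" filtering could look like is shown below in Python: high-value events are always ingested, known noise is dropped, and routine events are sampled. The rules and field names are assumptions for illustration only.

    # Keep high-value telemetry, drop known noise, and sample the rest before shipping it.
    # The rules and field names are assumptions for illustration.
    KEEP_LEVELS = {"ERROR", "WARN"}
    SAMPLE_ROUTINE_EVERY = 100  # keep 1 in 100 routine events for context

    def should_ingest(event: dict, counter: int) -> bool:
        if event.get("level") in KEEP_LEVELS:
            return True
        if event.get("path") == "/healthz":  # health-check chatter rarely drives decisions
            return False
        return counter % SAMPLE_ROUTINE_EVERY == 0

    events = [{"level": "INFO", "path": "/healthz"}, {"level": "ERROR", "path": "/pay"}]
    kept = [e for i, e in enumerate(events) if should_ingest(e, i)]
    print(kept)  # only the ERROR event is ingested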


Shedding Light on Dark Patterns in FinTech: Impact of DPDP Act

In practice, these patterns exploit human psychology and trick people into making unwanted choices/ purchases. It has become a menace for the FinTech industry. These patterns are used to encourage people to sign up for loans, credit cards, and other financial products that they may not need or understand. However, the new Digital Personal Data Protection Act, 2023 (“DPDP Act”), can be used to bring such dark patterns under control. The DPDP Act requires online platforms to seek consent of Data Principals through clear, specific and unambiguous notice before processing any data. Further, the Act empowers individuals to retract/ withdraw consent to any agreement at any juncture.  ... Companies will need to review their user interfaces and remove any dark patterns that they are using and protect the personal data and use the data for ‘legitimate purposes’ only and take consent from users, through clear affirmative action, in unambiguous terms. They will also need to develop new ways to promote their products and services without relying on deception.


Can business trust ChatGPT?

It might seem premature to worry about trust when there is already so much interest in the opportunities Gen AI can offer. However, it needs to be recognized that there’s also an opportunity cost — inaccuracy and misuse could be disastrous in ways organizations can’t easily anticipate. Up until now, digital technology has been traditionally viewed as being trustworthy in the sense that it is seen as being deterministic. Like an Excel formula, it will be executed in the same manner 100% of the time, leading to a predictable, consistent outcome. Even when the outcome yields an error — due to implementation issues, changes in the context in which it has been deployed, or even bugs and faults — there is nevertheless a sense that technology should work in a certain way. In the case of Gen AI, however, things are different; even the most optimistic hype acknowledges that it can be unpredictable, and its output is often unexpected. Trust in consistency seems to be less important than excitement at the sheer range of possibilities Gen AI can deliver, seemingly in an instant.


A Few Best Practices for Design and Implementation of Microservices

The first step is to define the microservices architecture. It has to be established how the services will interact with each other before a company attempts to optimise their implementation. Once microservices architecture gets going, we must be able to optimise the increase in speed. It is better to start with a few coarse-grained but self-contained services. Fine graining can happen as the implementation matures over time. The developers, operations team, and testing fraternity may have extensive experience in monoliths, but a microservices-based system is a new reality; hence, they need time to cope with this new shift. Do not discard the monolithic application immediately. Instead, have it co-exist with the new microservices, and iteratively deprecate similar functionalities in the monolithic application. This is not easy and requires a significant investment in people and processes to get started. As with any technology, it is always better to avoid the big bang approach, and identify ways to get the toes wet before diving in head first.
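A minimal Python sketch of that coexistence phase, often described as a strangler-style migration, is shown below: a thin routing layer sends already-extracted functionality to the new services and everything else to the monolith. The paths and service URLs are hypothetical.

    # During coexistence, a thin routing layer sends already-extracted functionality to the new
    # services and everything else to the monolith. Paths and URLs are hypothetical.
    MONOLITH_URL = "http://legacy-app.internal"
    MICROSERVICES = {
        "/orders": "http://orders.internal",
        "/payments": "http://payments.internal",
    }

    def upstream_for(path: str) -> str:
        for prefix, url in MICROSERVICES.items():
            if path.startswith(prefix):
                return url             # functionality that has already been extracted
        return MONOLITH_URL            # unmigrated functionality keeps working unchanged

    print(upstream_for("/orders/123"))   # routed to the new microservice
    print(upstream_for("/invoices/7"))   # still served by the monolith

As more functionality is extracted, entries are added to the routing table and the corresponding code paths in the monolith are deprecated, which is exactly the iterative deprecation the passage describes.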


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organisations

Collaboration is at the heart of teamwork. Many modern organisations set up teams to be cross-functional or multidisciplinary. Multidisciplinary teams are made up of specialists from different disciplines collaborating together daily towards a shared outcome. They have the roles needed to design, plan, deliver, deploy and iterate a product or service. Modern approaches and frameworks often focus on increasing flow and reducing blockers, and one way to do this is to remove the barrier between functions. However, as organisations grow in size and complexity, they look for different ways of working together, and some of these create collaboration anti-patterns. Three of the most common antipatterns I see and have named here are: One person split across multiple teams; Product vs. engineering wars; and X-led organisations.


The Rise of the Malicious App

Threat actors have changed the playing field with the introduction of malicious apps. These applications add nothing of value to the hub app. They are designed to connect to a SaaS application and perform unauthorized activities with the data contained within. When these apps connect to the core SaaS stack, they request certain scopes and permissions. These permissions then allow the app the ability to read, update, create, and delete content. Malicious applications may be new to the SaaS world, but it's something we've already seen in mobile. Threat actors would create a simple flashlight app, for example, that could be downloaded through the app store. Once downloaded, these minimalistic apps would ask for absurd permission sets and then data-mine the phone. ... Threat actors are using sophisticated phishing attacks to connect malicious applications to core SaaS applications. In some instances, employees are led to a legitimate-looking site, where they have the opportunity to connect an app to their SaaS. In other instances, a typo or slightly misspelled brand name could land an employee on a malicious application's site. 
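To illustrate the kind of permission review this implies, here is a small Python sketch that compares the scopes an app requests against what its stated purpose should require. The app name and scope strings are invented for illustration.

    # Compare the scopes a third-party app requests with what its declared purpose should need.
    # The app name and scope strings are invented for illustration.
    EXPECTED_SCOPES = {"calendar-helper": {"calendars.read"}}

    def excessive_scopes(app_name: str, requested: set) -> set:
        """Flag read/update/create/delete permissions beyond the app's declared purpose."""
        return requested - EXPECTED_SCOPES.get(app_name, set())

    requested = {"calendars.read", "files.read.all", "mail.send"}
    print(excessive_scopes("calendar-helper", requested))
    # flags files.read.all and mail.send -> review before the app is allowed to connect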


What Is GreenOps? Putting a Sustainable Focus on FinOps

If the future of cloud sustainability appears bleak, Arora advises looking to examples of other tech advancements and the curve of their development, where early adopters led the way and then the main curve eventually followed. “The same thing happened with electric cars,” Arora points out. “They didn’t enter the mainstream because they were better for the environment; they entered the mainstream because the cost came down.” And this is what he predicts will happen with cloud sustainability. Right now, the early adopters are stepping forward and championing GreenOps as a part of the FinOps equation. In a few years, others will be able to measure their data, analyze how they reduced their carbon impact and what effect it had on cloud spending and savings, and then follow their lead. It’s naive to think that most companies will go out of their way (and perhaps even increase their cloud spending) to reduce their carbon footprint. 


The Growing Importance of AI Governance

As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles: the complexity of the systems requires rule-making by technologists rather than politicians, bureaucrats, and judges; and the thorniest issues in AI governance involve value-based decisions rather than purely technical ones. An approach based on regulatory markets has been proposed that attempts to bridge the divide between government regulators who lack the required technical acumen and technologists in the private sector whose actions may be undemocratic. The technique adopts an outcome-based approach to regulation in place of the traditional reliance on prescriptive command-and-control rules. AI governance under this model would rely on licensed private regulators charged with ensuring AI systems comply with outcomes specified by governments, such as preventing fraudulent transactions and blocking illegal content. The private regulators would also be responsible for the safe use of autonomous vehicles, use of unbiased hiring practices, and identification of organizations that fail to comply with the outcome-based regulations.


Legal Issues for Data Professionals

Lawyers identify risks data professionals may not know they have. Moreover, because data is a new field of law, lawyers need to be innovative in creating legal structures in contracts to allow two or more parties to achieve their goals. For example, there are significant challenges attempting to apply the legal techniques traditionally used with other classes of business assets (such as intellectual property, real property, and corporate physical assets) to data as a business asset class. Because the old legal techniques do not fit well, lawyers and their clients need to develop new ways of handling the business and legal issues that arise, and in so doing, invent new legal structures that meet the specific attributes of data that differentiate data from other business assets. To take one example, using software agreements as a template for data transactions will not always work because the IP rights for software do not align with data, the concept of software deliverables and acceptance testing is not a good fit, and the representations and warranties are both over and underinclusive. 



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill

Daily Tech Digest - September 22, 2023

HR Leaders’ strategies for elevating employee engagement in global organisations

In the age of AI, HR technologies have emerged as powerful tools for enhancing employee engagement by streamlining HR processes, improving communication, and personalising the employee experience. Sreedhara added “By embracing HR Tech, we can enhance the employee experience by reducing administrative burdens, improving access to information, and enabling employees to focus on more meaningful aspects of their work. Moreover, these technologies can contribute to greater employee engagement. Enhancing employee experience via HR tech and tools can improve efficiency, and empower employees to take more control of their work-related tasks. We have also enabled some self-service technologies, such as: an employee portal that serves all HR-related tasks and gives access to policies and processes across the employee life cycle – onboarding, performance management, benefits enrolment, and expense management; employee feedback and surveys; and a databank for predictive analysis (early warning systems) and employee engagement management.”


Bolstering enterprise LLMs with machine learning operations foundations

Risk mitigation is paramount throughout the entire lifecycle of the model. Observability, logging, and tracing are core components of MLOps processes, which help monitor models for accuracy, performance, data quality, and drift after their release. This is critical for LLMs too, but there are additional infrastructure layers to consider. LLMs can “hallucinate,” where they occasionally output false knowledge. Organizations need proper guardrails—controls that enforce a specific format or policy—to ensure LLMs in production return acceptable responses. Traditional ML models rely on quantitative, statistical approaches to apply root cause analyses to model inaccuracy and drift in production. With LLMs, this is more subjective: it may involve running a qualitative scoring of the LLM’s outputs, then running it against an API with pre-set guardrails to ensure an acceptable answer. Governance of enterprise LLMs will be both an art and science, and many organizations are still understanding how to codify them into actionable risk thresholds. 
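The guardrail idea can be sketched in a few lines of Python: score the model's output, then enforce a policy before the response is returned. The scoring function, banned phrases, and threshold below are placeholders for whatever evaluation method and guardrail service an organization actually uses.

    # Score the model's output, then enforce a policy before the response reaches the user.
    # The scoring rule, banned phrases, and threshold are placeholders for a real guardrail service.
    BANNED_PHRASES = {"guaranteed returns", "medical diagnosis"}

    def qualitative_score(answer: str) -> float:
        # Stand-in for an LLM- or rubric-based grader returning 0.0 (unacceptable) to 1.0 (good).
        return 0.0 if any(phrase in answer.lower() for phrase in BANNED_PHRASES) else 0.9

    def guarded_response(answer: str, threshold: float = 0.7) -> str:
        if qualitative_score(answer) < threshold:
            return "I can't help with that; please consult a qualified advisor."  # enforced fallback
        return answer

    print(guarded_response("This fund offers guaranteed returns of 20%."))  # fallback is returned

In production the score, the original answer, and the enforced outcome would all be logged, so the same observability and drift-monitoring practices described above still apply.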


Reimagining Application Development with AI: A New Paradigm

AI-assisted pair programming is a collaborative coding approach where an AI system — like GitHub Copilot or TestPilot — assists developers during coding. It’s an increasingly common approach that significantly impacts developer productivity. In fact, GitHub Copilot is now behind an average of 46 percent of developers’ code and users are seeing 55 percent faster task completion on average. For new software developers, or those interested in learning new skills, AI-assisted pair programming acts as training wheels for coding. With the benefits of code snippet suggestions, developers can avoid struggling with beginner pitfalls like language syntax. Tools like ChatGPT can act as a personal, on-demand tutor — answering questions, generating code samples, and explaining complex code syntax and logic. These tools dramatically speed the learning process and help developers gain confidence in their coding abilities. Building applications with AI tools hastens development and provides more robust code.


Don't Let AI Frenzy Lead to Overlooking Security Risks

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit. Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said. "There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about." Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.


The second coming of Microsoft's do-it-all laptop is more functional than ever

Microsoft's Surface Laptop Studio 2 is really unlike any other laptop on the market right now. The screen is held up by a tiltable hinge that lets it switch from what I'll call "regular laptop mode" to stage mode (the display is angled like the image above) to studio mode (the display is laid flat, screen-side up, like a tablet). The closest thing I can think of is, well, the previous Laptop Studio model, which fields the same shape-shifting form factor. But after today, if you're the customer for Microsoft's screen-tilting Surface device, then your eyes will be all over the latest model, not the old. That's a good deal, because, unlike the predecessor, the new Surface Laptop Studio 2 features an improved 13th Gen Intel Core H-class processor, NVIDIA's latest RTX 4050/4060 GPUs, and an Intel NPU on Windows for video calling optimizations (which never hurts to have). Every Microsoft expert on the demo floor made it clear to me that gaming and content creation workflows are still the focus of the Studio laptop, so the changes under the hood make sense.


Why more security doesn’t mean more effective compliance

Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes. In fact, it explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. This is a challenge when organizations rely on manual evidence gathering. Panaseer research reveals that while 82% are confident they’re able to meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits. This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls.


EU Chips Act comes into force to ensure supply chain resilience

“With the entry into force today of the European Chips Act, Europe takes a decisive step forward in determining its own destiny. Investment is already happening, coupled with considerable public funding and a robust regulatory framework,” said Thierry Breton, commissioner for Internal Market, in comments posted alongside the announcement. “We are becoming an industrial powerhouse in the markets of the future — capable of supplying ourselves and the world with both mature and advanced semiconductors. Semiconductors that are essential building blocks of the technologies that will shape our future, our industry, and our defense base,” he said. The European Union’s Chips Act is not the only government-backed plan aimed at shoring up domestic chip manufacturing in the wake of the supply chain crisis that has plagued the semiconductor industry in recent years. In the past year, the US, UK, Chinese, Taiwanese, South Korean, and Japanese governments have all announced similar plans.


Microsoft Copilot Brings AI to Windows 11, Works Across Multiple Apps and Your Phone

With Copilot, it's possible to ask the AI to write a summary of a book in the middle of a Word document, or to select an image and have the AI remove the background. In one example, Microsoft showed a long email and demonstrated that when you highlight the text, Copilot appears so you can ask it questions related to the email. And that information can be cross-referenced to information found online, such as asking Copilot for lunch spots nearby based on the email's content. Copilot will be available on the Windows 11 desktop taskbar, making it instantly available at one click. Microsoft says that whether you're using Word, PowerPoint or Edge, you can call on Copilot to assist you with various tasks. It can also be called on via voice. Copilot can connect to your phone, so, for example, you can ask it when your next flight is and it'll look through your text messages and find the necessary information. Edge, Microsoft's web browser, will also have Copilot integrations. 


What Are the Biggest Lessons from the MGM Ransomware Attack?

Ransomware groups increasingly focus on branding and reputation, according to Ferhat Dikbiyik, head of research at third-party risk management software company Black Kite. “When ransomware first made its appearance, the attacks were relatively unsophisticated. Over the years, we have observed a marked elevation in their capabilities and tactics,” he tells InformationWeek in a phone interview. ... The group also called out: “The rumors about teenagers from the US and UK breaking into this organization are still just that -- rumors. We are waiting for these ostensibly respected cybersecurity firms who continue to make this claim to start providing solid evidence to support it.” Dikbiyik also notes that ransomware groups’ more nuanced selection of targets is an indication of increased professionalism. “These groups are doing their homework. They have resources. They acquire intelligence tools…they try to learn their targets,” he says. While ransomware is lucrative, money isn’t the only goal. Selecting high-profile targets, such as MGM, helps these groups to build a reputation, according to Dikbiyik.


A Dimensional Modeling Primer with Mark Peco

“Dimensional models are made up of two elements: facts and dimensions,” he explained. “A fact quantifies a property (e.g., a process cost or efficiency score) and is a measurement that can be captured at a point in time. It’s essentially just a number. A dimension provides the context for that number (e.g., when it was measured, who was the customer, what was the product).” It’s through combining facts and dimensions that we create information that can be used to answer business questions, especially those that relate to process improvement or business performance, Peco said. Peco went on to say that one of the biggest challenges he sees with companies using dimensional models is with integrating the potentially huge number of models into one coherent picture of the business. “A company has many, many processes,” he said, “and each requires its own dimensional model, so there has to be some way of joining these models together to give a complete picture of the organization.” 
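A tiny worked example of facts and dimensions, using pandas, is shown below (the tables and column names are invented): the fact table holds the measurements, the dimension supplies the context, and joining them answers a business question.

    import pandas as pd

    # Tiny illustrative star schema; table and column names are invented.
    fact_sales = pd.DataFrame({
        "date_key": [20230922, 20230922, 20230923],
        "product_key": [1, 2, 1],
        "sales_amount": [120.0, 75.0, 200.0],   # the fact: just a number
    })
    dim_product = pd.DataFrame({
        "product_key": [1, 2],
        "product_name": ["Widget", "Gadget"],   # the context that gives the number meaning
        "category": ["Hardware", "Hardware"],
    })

    # Joining facts to dimensions turns measurements into answers to business questions,
    # e.g. "how much did each product sell?"
    report = (fact_sales.merge(dim_product, on="product_key")
                        .groupby("product_name")["sales_amount"].sum())
    print(report)  # Gadget 75.0, Widget 320.0

The integration challenge Peco describes shows up here as well: each process gets its own fact table, so shared, conformed dimensions (product, date, customer) are what stitch the separate models into one picture of the business.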



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden