Daily Tech Digest - January 01, 2023

New Year’s resolutions for cloud pros

We live in days when cloud skills are defined by specialization. People aren’t just cloud database experts, they are experts on a specific cloud database on a specific cloud provider. The same can be said for cloud-based business intelligence, a specific SaaS provider, or cloud operations focused on a specific OS configuration. We seem to fall into niches. This limits your options if your specific cloud technology becomes less popular. It’s better to have a skill waiting in the wings than to learn one at the last minute. Look at job sites to see what skills are most in demand that are somewhat related to your current skills and obtain the basic chops that will allow you to talk your way into a new gig if needed. For instance, if you’re focused just on a single cloud object database, perhaps learn about one or two other object databases on another cloud provider. This should be a relatively easy transition given that the concepts are much the same. You can diversify even more, such as learning about cloud-native development if you’re currently a cloud developer.
 

The one real problem with synthetic media

Synthetic media promises a very near future in which advertisements are custom generated for each customer, super-realistic AI customer service agents answer the phone even at small and medium-sized companies, and all marketing, advertising, and business imagery is generated by AI rather than by human photographers and graphics people. The technology promises AI that writes software, handles SEO, and posts on social media without human intervention. Great, right? The trouble is that few are thinking about the legal ramifications. Let’s say you want your company’s leadership to be presented on an “About Us” page on your website. Companies now are pumping existing selfies into an AI tool, choosing a style, then generating fake photos that all look like photos taken in the same studio with the same lighting, or painted by the same artist with the same style and palette of colors. But the styles are often “learned” by the AI by processing (in legal terms) the intellectual property of specific photographers or artists.


The Curious Case of Linux: It’s for Everyone, but Nobody Uses it

There are three main reasons that users shy away from Linux. The first is the perceived unintuitiveness of the OS, which is the biggest fear of new users. The second is the lack of support for applications, games, and devices – a problem that has plagued Linux forever. The third, and most questionable, is the toxic fanbase associated with the operating system, which commonly undermines the efforts of newcomers to the ecosystem. Command-line-interface nightmares are the most-quoted reason newcomers give for steering clear of the ecosystem. In addition to this, software developers rarely optimise applications for use in Linux, making compatibility a nightmare for creators and power users. To combat this, the community has come up with distros that inherently require less technical know-how than others. One of the best examples of this is Pop!_OS. ... Another major problem that average users have with Linux is not only the lack of software, but a lack of support for games.


Building Security Champions

A Security Champion is a team member who takes on the responsibility of acting as the primary advocate for security and the first line of defense for security issues within the team. Or, more plainly: the person who is most excited about security on a team. They want to read the book, fix the bug, or ask security questions. Every time. Security champions are your communicators. They deliver security messages to each dev team, teaching, sharing, and helping. They are your point of contact, delivering messages to and from the security team and keeping you up to date on what matters to your team. They are your advocates. They perform security work for their dev team, with your help. They also advocate for security, asking questions in situations you would have been left out of and raising concerns you might have missed. They are a peer for everyone on their team and can influence in ways that you yourself cannot. In the next few paragraphs, we will cover how to build an amazing security champions program!


Italian Healthcare Group Targeted in Data-Leaking Shakedown

The criminals claim they reached out directly to hospital staff: "We has also ask some of employees during phone calls about the incident but they answered that they didn't heard about any breach. So, they were asked to review the evidence in Live Chat and we have repeatedly tried to make it clear that hundreds of thousands of personal data have been compromised due to their negligence." The criminals add: "Our advise is to replace the entire IT staff and have them undergo proficiency tests and check them for budget wasting as well." Take all such posturing and self-serving announcements with a big grain of salt, says Brett Callow, a threat analyst at security firm Emsisoft who closely tracks ransomware groups' activities. ... "Why do they do this? It's all about PR and branding. They think that organizations may be less likely to want to hand money to the type of evil criminals who are happy to put lives at risk by carrying out financially motivated attacks on hospitals."


Workplace Trends You Need to Know for 2023

As we near the end of 2022, a shift is happening — for the better. The U.S. Surgeon General reported that 71% of employees believe their employer is more concerned about their mental health and wellbeing than ever before. This is a huge step forward and one we must grasp and run with. In response, the U.S. Surgeon General released a framework that aims to help workplaces improve the mental health and wellbeing of their employees. This includes: ensuring there is an opportunity for growth, valuing employee contributions, enhancing social connections in the workplace, and focusing on achieving better work-life integration. We're likely to see more mental wellbeing initiatives and strategies employed across businesses that deliver meaningful and practical help to their employees — from self-care days off once a month to increased wellbeing benefits, mental health first aid training and even adaptations to the workplace.


US Congress funds cybersecurity initiatives in FY2023 spending bill

The bill stipulates that no government agency may use their funds to buy telecom equipment from Chinese tech giants Huawei or ZTE for “high or moderate impact information systems,” as determined by the National Institute of Standards and Technology (NIST). It further states that agencies cannot use any of their funds for technology, including biotechnology, digital, telecommunications, and cyber, developed by the People’s Republic of China unless the secretary of state, in consultation with the USAID administrator and the heads of other federal agencies, as appropriate, determines that such use does not adversely impact the national security of the United States. Moreover, no agency can spend funds on entities owned, directed, or subsidized by China, Iran, North Korea, or Russia unless the FBI or other appropriate federal entity has assessed any risk of cyber espionage or sabotage associated with acquisitions from these entities. ... Finally, the bill amends the Federal Food, Drug, and Cosmetic Act to make medical device makers meet specific cybersecurity standards. 


Cloud Adoption Plans Accelerate, Highlighting Need for Qualified IT

As organizations transition to providing digital solutions in a digital workplace, public, multi-, and hybrid cloud adoption is on the rise. Farid Roshan, global head of digital enablement practice at Altimetrik, says the traditional data center mindset leads to high sunk costs for procuring appliances and difficulty in attracting talent to support data center maintenance activities. “Organizations lose precious time and energy focusing on managing infrastructure vs. building products that bring value to their customers,” he says. From his perspective, public cloud platforms provide IT teams the ability to focus on creating innovative solutions and attracting highly skilled talent to develop products that drive business growth, while reducing overall IT cost of ownership. Roshan adds cloud adoption can lead to unexpected delays and failure in transforming organizations if the cloud strategy is not well understood across the organization. “Understanding the goals for moving to the cloud as well as implementing an executive cloud strategy, defining a roadmap and OKRs, will allow for business and IT groups to align their annual and quarterly goals,” he says.


What is the role of the data manager?

The data manager’s function is essentially to oversee the value chain and ensure data is delivered effectively, says Carruthers. “This means helping create data which is accessible, usable and safe. Information can then be delivered to the right place and in a good condition so it can be used in the most effective way possible.” Carruthers compares the role of data manager to the conductor in an orchestra. “The manager is there to oversee the whole data team, rather than frantically trying to play every instrument themselves. As the orchestra analogy suggests, it is a data manager’s role to ensure the song sheet is followed by every team member. This means managing the use of data to ensure it goes through the correct value chain.” The data manager role is not just about being “good with data”. It involves a combination of technical and interpersonal skills, says Andy Bell, vice president global data product management at data integrity specialist Precisely. As well as technical skills, he says data managers need to have “a thorough understanding about the application of technology”. 


Cybercriminals create new methods to evade legacy DDoS defenses

Attackers will continue to make their mark in 2023 by trying to develop new ways to evade legacy DDoS defenses. We saw Carpet Bomb attacks rearing their head in 2022 by leveraging the aggregate power of multiple small attacks, designed specifically to circumvent legacy detect-and-redirect DDoS protections or neutralize ‘black hole’ sacrifice-the-victim mitigation tactics. This kind of cunning will be on display as DDoS attackers look for new ways of wreaking havoc across the internet and attempt to outsmart existing thinking around DDoS protection. In 2023, the cyberwarfare that we have witnessed with the conflict in Ukraine will undoubtedly continue. DDoS will continue to be a key weapon in the Ukrainian and other conflicts both to paralyse key services and to drive political propaganda objectives. DDoS attack numbers rose significantly after the Russian invasion in February and DDoS continues to be used as an asymmetric weapon in the ongoing struggle.
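The carpet-bomb tactic described above works because legacy defenses alert per destination IP, while the attack spreads modest traffic across an entire range. A toy detector illustrates why aggregating by prefix restores visibility; the thresholds, traffic numbers, and simple /24 keying below are assumptions for the sketch, not any product's real logic.

```python
from collections import defaultdict

def aggregate_by_prefix(flows, per_ip_threshold=1000, prefix_threshold=5000):
    """Tally traffic per destination IP and per /24 prefix.

    flows: iterable of (dst_ip_str, byte_count) tuples.
    Carpet-bomb attacks keep every individual IP under the per-IP
    threshold; summing across the /24 makes the combined volume visible.
    """
    per_ip = defaultdict(int)
    per_prefix = defaultdict(int)
    for dst, nbytes in flows:
        per_ip[dst] += nbytes
        # Crude /24 aggregation key: first three octets of an IPv4 address.
        prefix = ".".join(dst.split(".")[:3]) + ".0/24"
        per_prefix[prefix] += nbytes
    ip_alerts = {ip for ip, b in per_ip.items() if b > per_ip_threshold}
    prefix_alerts = {p for p, b in per_prefix.items() if b > prefix_threshold}
    return ip_alerts, prefix_alerts

# 64 hosts in one /24 each receive 100 bytes: no single IP trips the
# per-IP threshold, but the prefix total (6,400 bytes) does.
flows = [(f"203.0.113.{i}", 100) for i in range(1, 65)]
ip_alerts, prefix_alerts = aggregate_by_prefix(flows)
```

Real mitigation platforms do this with flow telemetry at much larger scale, but the shape of the problem is the same: the signal only appears after aggregation.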



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - December 31, 2022

Credentials Are the Best Chance To Catch the Adversary

It used to be that attackers would batter the networks of their targets. Now, they may use LinkedIn and social media to identify your employees’ personal email accounts, hack them, and look for other credentials. External actors may also identify unhappy employees posting negative reviews on Glassdoor and offer to buy their credentials. Or these actors may just boldly call your employees out of the blue and offer to pay them for their login information and ongoing approval of multi-factor authentication (MFA) prompts. As a result, MFA is no longer a reliable tool in preventing attacks, as it can be easily gamed by malicious insiders. ... Not every attack uses stolen credentials to gain initial access to networks, but every attack eventually involves credentials. After gaining access to networks, bad actors see who has privileged access. ... Between nation-state actors, criminal gangs, computer-savvy teenagers and disgruntled insiders, the likelihood is that your network has already been penetrated. What you need now is to detect these attacks at speed to minimize their damage.


Artificial Intelligence Without The Right Data Is Just... Artificial

Successful AI “requires data diversity,” says IDC analyst Ritu Jyoti in a report from earlier in 2022. “Similarly, the full transformative impact of AI can be realized by using a wide range of data types. Adding layers of data can improve accuracy of models and the eventual impact of applications. For example, a consumer's basic demographic data provides a rough sketch of that person. If you add more context such as marital status, education, employment, income, and preferences like music and food choices, a more complete picture starts to form. With additional insights from recent purchases, current location, and other life events, the portrait really comes to life.” To enable AI to scale and proliferate across the enterprise, “stakeholders must ensure a solid data foundation that enables the full cycle of data management, embrace advanced analytical methods to realize the untapped value of data,” says Shub Bhowmick, co-founder and CEO of Tredence. “In terms of data availability and access, businesses need a way to parse through huge tracts of data and surface what’s relevant for a particular application,” says Sachdev.


Web3, the Metaverse and Crypto: Trends to Expect in 2023 and Beyond

If something good can come from FTX, it is that more regulations are coming, especially for centralized crypto exchanges, along with stricter rules on investor protection in the crypto trading space. Even Congress is paying attention, having summoned SBF for a congressional hearing (he was arrested the day before the scheduled hearing). These regulations are overdue – I have advocated for regulating centralized crypto exchanges since 2017. However, it’s better late than never. Legislators and regulators world-wide have zeroed in on the crypto market with an attempt to lay out rules, which hopefully prevents future catastrophes such as FTX. But legislators and regulators must be cautious in their approach, making sure not to stifle Web3 innovation. If they understand the difference between cryptocurrency as an asset class that trades on a centralized trading platform, and innovation that utilizes Web3 technology, and stick to investor protection while creating a welcoming environment for the development of Web3 applications, then we might be expecting a favorable legislative environment both for investors and developers.


Microservices Integration Done Right Using Contract-Driven Development

When all the code is part of a monolith, the API specification for a service boundary may just be a method signature. Also, these method signatures can be enforced through mechanisms such as compile-time checks, thereby giving early feedback to developers. However, when a service boundary is lifted to an interface such as an HTTP REST API by splitting the components into microservices, this early feedback is lost. The API specification, which was earlier documented as an unambiguous method signature, now needs to be documented explicitly to convey the right way of invoking it. This can lead to a lot of confusion and communication gaps between teams if the API documentation is not machine-parsable. ... Adopting an API specification standard such as OpenAPI or AsyncAPI is critical to bring back the ability to communicate API signatures in an unambiguous and machine-readable manner. While this adds to developers’ workload to create and maintain these specs, the benefits outweigh the effort.
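To show roughly why a machine-readable spec restores early feedback, the toy checker below validates a request payload against a hand-rolled, OpenAPI-style schema fragment. The operation, field names, and schema shape are invented for the sketch; real contract tools (e.g. validators built on OpenAPI documents) are far more complete.

```python
# Illustrative fragment of an OpenAPI-style schema for one operation.
# The "sku"/"quantity" fields are assumptions, not a real service's contract.
ORDER_SCHEMA = {
    "required": ["sku", "quantity"],
    "properties": {"sku": str, "quantity": int},
}

def validate_request(payload, schema):
    """Return a list of contract violations (an empty list means compatible)."""
    errors = []
    for field in schema["required"]:
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, expected in schema["properties"].items():
        if field in payload and not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

# A conforming call and a call that violates the contract:
ok = validate_request({"sku": "A-1", "quantity": 2}, ORDER_SCHEMA)
bad = validate_request({"sku": "A-1"}, ORDER_SCHEMA)
```

Because the schema is data rather than prose, both provider and consumer can run this kind of check in CI, which is the machine-enforced equivalent of the compile-time signature check a monolith gets for free.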


The Threat of Predictive Policing to Data Privacy and Personal Liberty

It's not just related to law enforcement targeting; it's also related to any legal decisions. Custody decisions, civil suit outcomes, insurance decisions, and even hiring decisions can all be influenced by the RELX-owned LexisNexis system, which gathers and aggregates data. Unfortunately, there's little recourse for someone who was unfairly treated due to a data-based risk assessment because people are rarely privy to the way these decisions are made. So, a corporate HR manager or Family Court judge could be operating off bad or incomplete data when making decisions that could effectively change lives. RELX and Thomson Reuters have disclaimers freeing them from liability for inaccurate data, which means your information could be mixed in with someone else's, causing serious repercussions in the wrong circumstances. In 2016, a man named David Alan Smith successfully sued LexisNexis Screening Solutions when the company provided his prospective employer with an inaccurate background check. 


10 digital twin trends for 2023

Over the last year, the world has been wowed by how easy it is to use ChatGPT to write text and Stable Diffusion to create images. ... Over the next year, we can expect more progress in connecting generative AI techniques with digital twin models for describing not only the shape of things but how they work. Yashar Behzadi, CEO and founder of Synthesis AI, a synthetic data tools provider, said, “This emerging capability will change the way games are built, visual effects are produced and immersive 3D environments are developed. For commercial usage, democratizing this technology will create opportunities for digital twins and simulations to train complex computer vision systems, such as those found in autonomous vehicles.” ... Hybrid digital twins make it easier for CIOs to understand the future of a given asset or system. They will enable companies to merge asset data collected by IoT sensors with physics data to optimize system design, predictive maintenance and industrial asset management. Banerjee foresees more and more industries adopting this approach with disruptive business results in the coming years.


Change Management is Essential for Successful Digital Transformation

Vasantraj notes, “Organizational culture is vital in fostering leadership and enabling enterprises to adapt. Successful teams are built on trust and the ability to put aside self-interest and work together. Teams must think of organizations as a single entity and keep a growth mindset.” This type of collaborative culture doesn’t emerge without a lot of effort. Amy Ericson, a Senior Vice President at PPG, suggests one way a great change management leader can make their efforts employee-centric is to lead with empathy. She makes three helpful recommendations, “First, ask how your people are. Really ask them. Then, listen. You may find that they’re struggling, and your interest in how they are doing and genuine concern will help them move forward productively. Second, acknowledge their situation and ask how you can help. Do they need access to new tools or resources? Do they need a different schedule? Third, thank them, and follow through. Praise their courage to be honest, and deliver on your promises to help them succeed.”[5] Beyond being an empathetic leader, the BCG team highly recommends getting employees involved from the beginning of the change process.

‘There’s a career in cybersecurity for everyone,’ Microsoft Security CVP says

When there’s an abundance of opportunities, there are many ways of getting into that opportunity. We do have an incredible talent shortage. Going back to a myth buster, 37% of the people that we surveyed said that they thought a college degree was necessary to be in security. It’s not true. You don’t need a college degree. Many security jobs don’t require a four-year college degree. You can qualify by getting a certificate, an associate degree from a community college. Hence, why we are working with community colleges. There’s also a lot of resources for free because it can be daunting. The cost itself can be daunting, but there’s a lot of resources. Microsoft has a massive content repository that we have made available. We have made certifications. These are available to anyone who wants to take them, and there are ways you can train yourself and get into cybersecurity. We have this abundance of opportunity, which creates new ways of getting in, and we need to educate people about all these facets about how they can get in.


How the Rise of Machine Identities Impacts Enterprise Security Strategies

First, security leaders must rethink their traditional identity and access management (IAM) strategies. Historically, IAM has focused on human identities authenticating access systems, software and apps on a business network. However, with the rise of containers, APIs and other technology, a secure IAM approach must utilize cryptographic certificates, keys and other digital secrets that protect connected systems and support an organization’s underlying IT infrastructure. With the shift to the cloud, a Zero Trust framework has become the new security standard, where all users, machines, APIs and services must be authenticated and authorized before being able to access apps and data. In the cloud, there is no longer a traditional security perimeter around the data center, so the service identity is the new perimeter. When handling machine identities, fine-grained consent controls are essential in protecting privacy as data is moved between machines. The authorization system discerns the “who, what, where, when, and why” and confirms that the owner has consented to the sharing of that data and the person requesting access isn’t a fraudster. 
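A common building block for the machine-identity authentication described above is mutual TLS, where the service refuses any client that cannot present a certificate issued by a trusted CA. A minimal sketch using the Python standard library follows; the CA bundle path is a placeholder assumption, and a real deployment would also pin identities and rotate certificates.

```python
import ssl

def make_server_context(ca_path=None):
    """Build a server-side TLS context that requires a client certificate.

    With verify_mode set to CERT_REQUIRED, the TLS handshake itself
    authenticates the calling machine before any request is served,
    which is one concrete form of 'the service identity is the new
    perimeter'.
    """
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED  # mutual TLS: client must present a cert
    if ca_path:
        # In practice this would be an internal CA bundle; the path is
        # hypothetical here and no file is loaded in this sketch.
        ctx.load_verify_locations(cafile=ca_path)
    return ctx

ctx = make_server_context()
```

The authorization layer the article describes (the “who, what, where, when, and why”) then sits on top of this authenticated identity, typically by inspecting fields of the presented certificate or an attached token.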


3 Predictions For Fintech Companies’ Evolution In 2023

If you spend even five minutes on LinkedIn, you know the debate between in-person, hybrid and distributed work is still a hot one. But what does the data tell us? Owl Labs’ State of Remote Work Report found the number of workers choosing to work remotely in 2022 increased 24%, those choosing hybrid went up 16% and interest for in-office work dropped by 24%. The data keeps rolling in with this McKinsey study that found, when offered, almost everyone takes the opportunity to work flexibly. Companies looking to embrace this flexible work mindset should focus on improving and optimizing synchronous activities like all-hands meetings, lunch and learns, and coffee chats. Supporting asynchronous work is also important. Personally, I’m a champion of written and narrative documentation of projects, which allows people to review and process on their own time and at their own pace. In my experience, this makes meetings even more productive and impactful so people can focus on the outcomes of time spent together. No one has a crystal ball for what the next year holds.



Quote for the day:

"Leadership matters more in times of uncertainty." -- Wayde Goodall

Daily Tech Digest - December 29, 2022

10 IT certifications paying the highest premiums today

The Certified in the Governance of Enterprise IT (CGEIT) certification is offered by ISACA to validate your ability to handle “the governance of an entire organization” and can also help prepare you for moving to a C-suite role if you aren’t already in an executive leadership position. The exam covers general knowledge of governance of enterprise IT, IT resources, benefits realization, and risk optimization. To qualify for the exam, you’ll need at least five years of experience in an advisory or oversight role supporting the governance of IT in the enterprise. ... The AWS Certified Security certification is a specialty certification from Amazon that validates your expertise and ability with securing data and workloads in the AWS cloud. The exam is intended for those working in security roles with at least two years of hands-on experience securing AWS workloads. It’s recommended that candidates for the exam have at least five years of IT security experience designing and implementing security solutions. ... To earn the certification, you will need to pass the AWS Certified Security Specialty exam, which consists of multiple-choice and multiple-response questions.


When will cloud computing stop growing?

So, no matter where the market goes, and even if the hyperscalers begin to seem more like legacy technology, the dependencies will remain and growth will continue. The hyperscaler market could become more complex and fragmented, but public clouds are the engines that drive growth and innovation. Will it stop growing at some point? I think there are two concepts to consider: First, cloud computing as a concept. Second, the utility of the technology itself. Cloud computing is becoming so ubiquitous, it will likely just become computing. If we use mostly cloud-based consumption models, the term loses meaning and is just baked in. I actually called for this in a book I wrote back in 2009. Others have called for this as well, but it’s yet to happen. When it does, my guess is that the cloud computing concept will stop growing, but the technology will continue to provide value. The death of a buzzword. The utility, which is the most important part, carries on. Cloud computing, at the end of the day, is a much better way to consume technology services. The idea of always owning our own hardware and software, running our own data centers, was never a good one.


Modernise and Bolster Your Data Management Practice with Data Fabric

Data has emerged as an invaluable asset that can not only be used to power businesses but can also be put to the wrong use for individual benefit. With stringent regulatory norms around data handling and management in place, data security, governance, and compliance need dedicated attention. Data fabric can significantly improve security by integrating data and applications from across physical and IT systems. It enables a unified and centralized route to create policies and rules. The ability to automatically link policies and rules based on metadata such as data classifications, business terms, user groups, and roles, including policies on data access controls, data privacy, data protection, and data quality, ensures optimized data governance, security, and compliance. Changing business dynamics require businesses to be ahead of the curve by aptly and actively using data. Data fabric is a data operational layer that weaves through huge volumes of data from multiple sources and processes it using machine learning, enabling businesses to discover patterns and insights in real time.


It’s a Toolchain!

Even ‘one’ toolchain is really not the same chain of tools; it is the same CI/CD tool managing a pool of others. This has really interesting connotations for the idea of the “weakest link in the chain,” whether we’re talking security, compliance or testing, because the weakest link might depend on which tools are spawned this run. To take an easy example that doesn’t overlap with the biggest reason above—targeting containers for test and virtual machines (VMs) for deployment. Some organizations do this type of thing regularly due to licensing or space issues. Two different deployment steps in ‘one’ toolchain. There are more instances like this than you would think. “This project uses make, that one uses cmake” is an example of the type of scenarios we’re talking about. These minor variations are handled by what gets called from CI. Finally, most of the real-life organizations I stay in touch with are both project-based and are constantly evolving. That makes both of the above scenarios the norms, not the exceptions. While they would love to have one stack and one toolchain for all projects, no one realistically sees that happening anytime soon. 


How DevOps is evolving into platform engineering

Platform engineering is the next big thing in the DevOps world. It has been around for a few years. Now the industry is shifting toward it, with more companies hiring platform engineers or cloud platform engineers. Platform engineering opens the door for self-service capabilities through more automated infrastructure operations. With DevOps, developers are supposed to follow the "you build it, you run it" approach. However, this rarely happens, partly because of the vast number of complex automation tools. Since more and more software development tools are available, platform engineering is emerging to streamline developers' lives by providing and standardizing reusable tools and capabilities as an abstraction to the complex infrastructure. Platform engineers focus on internal products for developers. Software developers are their customers, and platform engineers build and run a platform for developers. Platform engineering also treats internal platforms as a product with a heavy focus on user feedback. Platform teams and the internal development platform scale out the benefits of DevOps practices. 


Top 5 Cybersecurity Trends to Keep an Eye on in 2023

Cyber security must evolve to meet these new demands as the world continues shifting towards remote and hybrid working models. With increased reliance on technology and access to sensitive data, organizations need to ensure that their systems are secure and their employees are equipped to protect against cyber threats. Organizations should consider implementing security protocols such as Multi-Factor Authentication (MFA), which requires additional authentication steps to prove the user’s identity before granting access to systems or data. MFA can provide an additional layer of protection against malicious actors who may try to access accounts with stolen credentials. Businesses should also consider developing policies and procedures for securing employee devices. This could include providing employees with secure antivirus software and encrypted virtual private networks (VPNs) for remote connections. Additionally, employees should be trained on the importance of strong passwords, unique passwords for each account, and the dangers of using public networks.
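To make the MFA recommendation concrete, here is a minimal sketch of the TOTP algorithm (RFC 6238) behind most authenticator apps, using only the Python standard library. The secret below is the RFC's published test secret, included purely for illustration; production systems use per-user secrets and a vetted library.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: server and authenticator app derive the
    same short-lived code from a shared secret, so a stolen password
    alone is not enough to log in."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret: the ASCII string "12345678901234567890".
SECRET = base64.b32encode(b"12345678901234567890").decode()
```

At Unix time 59 this secret yields the code 287082, matching the RFC's test vectors, which is a handy sanity check when wiring TOTP verification into a login flow.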


Understanding Data Management, Protection, and Security Trends to Design Your 2023 Strategy

Today more than ever there is a need for a modernized approach to data security, given that threats are growing increasingly sophisticated. Authentication-as-a-Service with built-in SSO capabilities, tightly integrated with cloud apps, will secure online access. Data encryption solutions with comprehensive key management will help customers protect their digital assets, whether on-premises or in the cloud. EDRM solutions with the widest file and app support will aid customers in protecting and keeping control over their data even outside their networks. DLP solutions with integrated user behavior analysis (UBA) modules help customers leverage their investment in DLP. Data discovery and classification help organizations get complete visibility into sensitive data with efficient data discovery, classification, and risk analysis across heterogeneous data stores. These are some of the approaches from which organizations can benefit as OEMs design data security solutions and products.
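A toy version of the data discovery and classification step might scan text fields with patterns like those below. The labels and regexes are illustrative assumptions only; real classifiers add checksums (e.g. Luhn for card numbers), surrounding context, and machine learning to cut false positives.

```python
import re

# Illustrative sensitive-data patterns; not production-grade detection.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels detected in a text field."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

# A hypothetical record pulled from one of many heterogeneous data stores:
record = "Contact: jane.doe@example.com, SSN 123-45-6789"
labels = classify(record)
```

Running a scan like this across every store, then attaching the resulting labels as metadata, is what lets downstream controls (DLP rules, access policies, encryption requirements) key off classification rather than guesswork.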


US-China chip war puts global enterprises in the crosshairs

“In addition to the chipmakers and semiconductor manufacturers in China, every company on the supply chain of advanced chipsets, such as the electronic vehicle manufacturers and HPC [high performance computing] makers in China, will be hit," said Charlie Dai, research director at market research firm Forrester. "There will also be collateral damage to the global technology ecosystem in every area, such as the chip design, tooling, and raw materials.” Enterprises might not feel the burn right away, since interdependencies between China and the US will be hard to unwind immediately. For example, succumbing to pressure from US businesses, in early December the US Department of Defense said it would allow its contractors to use chips from the banned Chinese chipmakers until 2028. In addition, the restrictions are not likely to have a direct effect on the ability of the global chip makers to manufacture semiconductors, since they have not been investing in China to manufacture chips there, said Pareekh Jain, CEO at Pareekh Consulting.


Financial Services Was Among Most-Breached Sectors in 2022

The practice of attackers sneaking so-called digital skimmers - typically, JavaScript code - onto legitimate e-commerce or payment platforms also continues. These tactics, known as Magecart-style attacks, most often aim to steal payment card data when a customer goes to pay. Attackers either use that data themselves or batch it up into "fullz," referring to complete sets of credit card information that are sold via a number of different cybercrime forums. Innovation continues among groups that practice Magecart tactics. In recent weeks, reports application security vendor Jscrambler, three different attack groups have begun wielding new, similar tactics designed to inject malicious JavaScript into legitimate sites. One of the groups has been injecting a "Google Analytics look-alike script" into victims' pages, while another has been injecting a "malicious JavaScript initiator that is disguised as Google Tag Manager." The third group is also injecting code, but does so by having registered the domain name for Cockpit, a free web marketing and analytics service that ceased operations eight years ago. 
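To make the skimmer-injection risk concrete, here is a minimal sketch of how a site owner might audit a page's external script sources against an allowlist. The allowlist entries and the lookalike attacker domain are hypothetical illustrations; real monitoring would run continuously against the live site with a maintained allowlist (and would be complemented by a Content-Security-Policy).

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist of script hosts this shop expects to load from.
ALLOWED_SCRIPT_HOSTS = {
    "www.googletagmanager.com",
    "www.google-analytics.com",
    "cdn.example-shop.com",
}

class ScriptAuditor(HTMLParser):
    """Collects external <script src=...> URLs whose host is not allowlisted."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return  # inline scripts need separate (hash-based) checking
        host = urlparse(src).netloc
        if host and host not in ALLOWED_SCRIPT_HOSTS:
            self.suspicious.append(src)

def audit_page(html: str) -> list[str]:
    auditor = ScriptAuditor()
    auditor.feed(html)
    return auditor.suspicious

page = """
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://analytics-cdn.attacker.example/skim.js"></script>
"""
print(audit_page(page))
```

The second script in the sample page mimics an analytics loader, the disguise the Jscrambler report describes, but its host gives it away to an allowlist check.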


Microservices Integration Done Right Using Contract-Driven Development

Testing an application is not just about testing the logic within each function, class, or component. Features and capabilities are the result of these individual snippets of logic interacting with their counterparts. If a service boundary/API between two pieces of software is not properly implemented, it leads to what is popularly known as an integration issue. Example: if functionA calls functionB with only one parameter while functionB expects two mandatory parameters, there is an integration/compatibility issue between the two functions. Within a single codebase, the compiler or unit tests flag this immediately, and such quick feedback helps us course-correct early and fix the problem right away. However, when we look at such compatibility issues at the level of microservices, where the service boundaries are at the HTTP, messaging, or event level, any deviation from or violation of the service boundary is not immediately identified during unit and component/API testing. The microservices must be tested with all their real counterparts to verify whether there are broken interactions. This is what is broadly (and in a way wrongly) classified as integration testing.
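The functionA/functionB example generalizes to service boundaries: a contract-driven check records the consumer's expectations and verifies the provider's actual response against them, without deploying the consumer at all. The sketch below is a minimal, hypothetical illustration of the idea, not any specific contract-testing tool; the field names and types are made up.

```python
# The consumer's expectations about a (hypothetical) /products endpoint,
# recorded as a schema-like contract.
CONTRACT = {
    "required_fields": {"id": int, "name": str, "price": float},
}

def check_contract(response: dict, contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the provider honors the contract."""
    violations = []
    for field, expected_type in contract["required_fields"].items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"wrong type for {field}: {type(response[field]).__name__}"
            )
    return violations

# The provider dropped 'price' -- an integration issue caught before deployment.
print(check_contract({"id": 1, "name": "widget"}, CONTRACT))
```

Running such checks against each service in isolation gives the same quick feedback a compiler gives within one codebase, which is the gap contract-driven development aims to close.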



Quote for the day:

“To command is to serve: nothing more and nothing less.” -- André Malraux

Daily Tech Digest - December 28, 2022

The 5-step plan for better Fraud and Risk management in the payments industry

The overall complexity and size of the digital payments industry make it extremely difficult to detect fraud. In this context, merchants and payment companies can introduce fraud monitoring and anti-fraud mechanisms that verify every transaction in real time. AI-based systems can take into account different aspects of a transaction, such as the amount, a unique bank card token, the user's digital fingerprint, and the payer's IP address, to evaluate its authenticity. Today, OTPs are synonymous with two-factor authentication and are thought to augment existing passwords with an extra layer of security. Yet fraudsters manage to circumvent them every day. With out-of-band authentication solutions combined with real-time fraud risk management solutions, the service provider can choose one of many multi-factor authentication options available during adaptive authentication, depending on their preference and risk profile. Just like 3D Secure, this is another internationally accepted compliance mechanism that ensures that all the intermediaries involved in the payments system take special care of sensitive client information.
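As a hypothetical illustration of the signals mentioned (amount, card token, digital fingerprint, payer IP), a rule-based scoring sketch might look like the following; production systems would use trained models rather than fixed weights, and the thresholds here are invented.

```python
def risk_score(txn: dict, known_fingerprints: set, high_risk_ips: set) -> int:
    """Score a transaction from 0 (benign) upward using simple, illustrative rules."""
    score = 0
    if txn["amount"] > 1000:
        score += 40  # unusually large amount for this merchant
    if txn["fingerprint"] not in known_fingerprints:
        score += 30  # device never seen for this cardholder
    if txn["ip"] in high_risk_ips:
        score += 30  # payer IP previously linked to fraud
    return score

txn = {"amount": 1500, "fingerprint": "dev-42", "ip": "203.0.113.9"}
score = risk_score(txn, known_fingerprints={"dev-7"}, high_risk_ips={"203.0.113.9"})
print(score, "REVIEW" if score >= 70 else "APPROVE")
```

A real adaptive-authentication flow would use such a score to decide which step-up factor, if any, to demand from the payer.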


The Importance of Pipeline Quality Gates and How to Implement Them

There is no doubt that CI/CD pipelines have become a vital part of the modern development ecosystem, allowing teams to get fast feedback on the quality of code before it gets deployed. At least that is the idea in principle. The sad truth is that too often companies squander the fantastic opportunity a CI/CD pipeline offers for rapid test feedback and solid quality control by failing to implement effective quality gates into their respective pipelines. A quality gate is an enforced measure built into your pipeline that the software needs to meet before it can proceed to the next step. This measure enforces certain rules and best practices that the code must adhere to, preventing poor quality from creeping into the code. It can also drive the adoption of test automation, as it requires testing to be executed in an automated manner across the pipeline. This has the knock-on effect of reducing the need for manual regression testing in the development cycle, driving rapid delivery across the project.
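As a minimal sketch, a quality gate can be as simple as a script that checks metrics produced by earlier pipeline stages against thresholds and blocks the build when any rule fails. The metric names and threshold values below are hypothetical examples, not a standard.

```python
# Each gate is a named rule over the metrics dict; a build proceeds only
# if every rule passes. Thresholds are illustrative policy choices.
GATES = {
    "line_coverage": lambda m: m["line_coverage"] >= 80.0,
    "failed_tests": lambda m: m["failed_tests"] == 0,
    "critical_lint": lambda m: m["critical_lint"] == 0,
}

def evaluate_gates(metrics: dict) -> list[str]:
    """Return the names of failed gates; an empty list means the build may proceed."""
    return [name for name, rule in GATES.items() if not rule(metrics)]

metrics = {"line_coverage": 74.5, "failed_tests": 0, "critical_lint": 2}
failed = evaluate_gates(metrics)
print("BLOCKED:" if failed else "PASS", failed)
```

In a real pipeline this script would run as its own stage and signal failure through its exit code, so the CI system stops promotion automatically.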


Best of 2022: Measuring Technical Debt

Of the different forms of technical debt, security and organizational debt are the ones most often overlooked and excluded from the definition. These are also the ones that often have the largest impact. It is important to recognize that security vulnerabilities that remain unmitigated are technical debt just as much as unfixed software defects. The question becomes more interesting when we look at emerging vulnerabilities or low-priority vulnerabilities. While most will agree that known, unaddressed vulnerabilities are a type of technical debt, it is questionable whether a newly discovered vulnerability is also technical debt. The key here is whether the security risk needs to be addressed and, for that answer, we can look at an organization's service level agreements (SLAs) for vulnerability management. If an organization sets an SLA that requires all high-level vulnerabilities to be addressed within one day, then we can say that high-level vulnerabilities older than a day are debt. This is not to say that vulnerabilities that do not exceed the SLA do not need to be addressed; only that vulnerabilities within the SLA represent new work and only become debt when they have exceeded the SLA.
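The SLA rule described above can be sketched in a few lines: a vulnerability only counts as debt once its age exceeds the SLA window for its severity. The severity tiers, day counts, and CVE ids below are hypothetical policy values.

```python
from datetime import date

# Hypothetical remediation SLAs, in days, per severity tier.
SLA_DAYS = {"high": 1, "medium": 30, "low": 90}

def security_debt(vulns: list[dict], today: date) -> list[str]:
    """Return ids of vulnerabilities whose age exceeds the SLA for their severity."""
    return [
        v["id"] for v in vulns
        if (today - v["found"]).days > SLA_DAYS[v["severity"]]
    ]

vulns = [
    {"id": "CVE-A", "severity": "high", "found": date(2022, 12, 20)},   # 7 days old
    {"id": "CVE-B", "severity": "medium", "found": date(2022, 12, 26)}, # 1 day old
]
print(security_debt(vulns, today=date(2022, 12, 27)))
```

Only CVE-A is debt here: the medium-severity finding is still new work inside its 30-day window.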


DevOps Trends for Developers in 2023

Security automation is the concept of automating security processes and tasks to ensure that your applications and systems remain secure and free from malicious threats. In the context of CI/CD, security automation ensures that your code is tested for vulnerabilities and other security issues before it gets deployed to production. In addition, by deploying security automation in your CI/CD pipeline, you can ensure that only code that has passed all security checks is released to the public/customers. This helps to reduce the risk of vulnerabilities and other security issues in your applications and systems. The goal of security automation in CI/CD is to create a secure pipeline that allows you to quickly and efficiently deploy code without compromising security. Since manual testing consumes a great deal of developers' time, many organizations are integrating security automation into their CI/CD pipelines today. ... Also, the introduction of AI/ML in the software development lifecycle (SDLC) is getting attention as the models are trained to detect irregularities in the code and give suggestions to enhance or rewrite it.
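A security-automation gate step might look like the following sketch: it reads a scanner report and blocks the release if any finding at or above the blocking severity is present. The report format and finding ids are hypothetical; a real pipeline would wire this to an actual scanner's output.

```python
import json

# Severities that block a release, per (hypothetical) policy.
BLOCKING = {"critical", "high"}

def gate(report_json: str) -> int:
    """Return a process exit code: 0 to release, 1 to block the pipeline."""
    findings = json.loads(report_json)["findings"]
    blockers = [f for f in findings if f["severity"] in BLOCKING]
    for f in blockers:
        print(f"BLOCK {f['id']} ({f['severity']})")
    return 1 if blockers else 0

report = '{"findings": [{"id": "SQLI-1", "severity": "high"}, {"id": "INFO-2", "severity": "low"}]}'
print("exit", gate(report))
```

Because the result is surfaced as an exit code, any CI system can treat the step as a hard gate without extra integration work.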


What Brands Get Wrong About Customer Authentication

When weighing friction for customers against practical security needs, one of the main challenges is convincing the revenue side of a business of the need for security best practice. Cybersecurity teams must demonstrate that the financial risks of not putting security in place - i.e., fraud, account takeover, reputation loss, regulatory fines, lawsuits, etc. - outweigh the loss of revenue and abandonment of transactions on the other side. There are always costs associated with security systems, but comparing the costs associated with fraud to those of implementing new security measures will justify the purchase. There is a fine balance between having effective security and operating a business. Customers quickly become frustrated by jumping through hoops to log in, and the password route is unsustainable. It's time to look at the relationship between security and authentication and develop solutions for both. Taking authentication to the next level requires thinking outside the box. If you want to implement an authentication strategy that doesn't drive away customers, you need to make customer experience the focal point.


Video games and robots want to teach us a surprising lesson. We just have to listen

The speedy, colorful ghosts zooming their way around the maze greeted me as I stared at the screen of a Pac-Man machine, a part of the 'Never Alone: Video Games and Other Interactive Design' exhibit of the Museum of Modern Art in New York City. Using the tiniest amount of RAM and code, each ghost is programmed with its own specific behaviors, which combine to create the masterpiece work, according to Paul Galloway, collection specialist for the Architecture and Design Department. This was the first time I'd seen video games inside a museum, and I had come to this exhibit to see if I could glean some insight into technology through the lens of art. It's an exhibit that is more timely now than ever, as technology has been absorbed into nearly every facet of our lives both at work and at home -- and what I learnt is that our empathy with technology is leading to new kinds of relationships between ourselves and our robot friends. ... According to Galloway, the Never Alone exhibit is linked to an Iñupiaq video game included in the exhibit, called Never Alone (Kisima Ingitchuna).


The increasing impact of ransomware on operational technology

To protect against initial intrusion of networks, organisations must consistently find and remediate key vulnerabilities and known exploits, while monitoring the network for attack attempts. Also, wherever possible, equipment should be kept up to date. VPNs in particular need close attention from cyber security personnel; new VPN keys and certificates must be created, with logging of activity over VPNs being enabled. Access to OT environments via VPNs calls for architecture reviews, multi-factor authentication (MFA) and jump hosts. In addition, users should read emails in plain text only, as opposed to rendering HTML, and disable Microsoft Office macros. For network access attempts from threat actors, organisations should perform an architecture review for routing protocols involving OT, and monitor for the use of open source tools. MFA should be implemented to access OT systems, and intelligence sources utilised for threat and communication identification and tracking.


The security risks of Robotic Process Automation and what you can do about it

RPA credentials are often shared so they can be used repeatedly. Because these accounts and credentials are left unchanged and unsecured, a cyber attacker can steal them, use them to elevate privileges, and move laterally to gain access to critical systems, applications, and data. In addition, users with administrator privileges can retrieve credentials stored in locations that are not secured. As many enterprises leveraging RPA have numerous bots in production at any given time, the potential risk is very high. Securing the privileged credentials utilised by this emerging digital workforce is an essential step in securing RPA workflows. ... The explosion in identities is putting more pressure on security teams since it leads to the creation of more vulnerabilities. The management of machine identities, in particular, poses the biggest problem, given that they can be generated quickly without consideration for security protocols. Further, while credentials used by humans often come with organisational policy that mandates regular updates, those used by robots remain unchanged and unmanaged. 


Best of 2022: Using Event-Driven Architecture With Microservices

Most existing systems live on-premises, while microservices live in private and public clouds, so the ability for data to transit the often unstable and unpredictable world of wide area networks (WANs) is tricky and time-consuming. There are mismatches everywhere: updates to legacy systems are slow, but microservices need to be fast and agile. Legacy systems use old communication mediums, but microservices use modern open protocols and APIs. Legacy systems are nearly always on-premises and at best use virtualization, but microservices rely on clouds and IaaS abstraction. The case becomes clear – organizations need an event-driven architecture to link all these legacy-versus-microservices mismatches. ... Orchestration is a good description – composers create scores containing sheets of music that will be played by musicians with differing instruments. Each score and its musician are like a microservice. In a complex symphony with a hundred musicians playing a wide range of instruments – like any enterprise with complex applications – far more orchestration is required.
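The event-driven idea can be sketched with a minimal in-process event bus; in a real deployment a broker would sit between the parties, so legacy systems and microservices can publish and subscribe without knowing each other's protocols or locations. Topic and handler names below are hypothetical.

```python
from collections import defaultdict

class EventBus:
    """A toy publish/subscribe hub: publishers never see subscribers directly."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
received = []
# Two independent microservices react to the same event.
bus.subscribe("order.created", lambda e: received.append(("billing", e["id"])))
bus.subscribe("order.created", lambda e: received.append(("shipping", e["id"])))
# A legacy ERP only needs to know how to emit the event, nothing else.
bus.publish("order.created", {"id": 42})
print(received)
```

The decoupling is the point: adding a third subscriber requires no change to the legacy publisher, which is exactly what makes the pattern a fit for slow-moving legacy systems.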


Scope 3 is coming: CIOs take note

Many companies in Europe have built teams to address IT sustainability and have appointed directors to lead the effort. Gülay Stelzmüllner, CIO of Allianz Technology, recently hired Rainer Karcher as head of IT sustainability. “My job is to automate the whole process as much as possible,” says Karcher, who was previously director of IT sustainability at Siemens. “This includes getting source data directly from suppliers and feeding that into data cubes and data meshes that go into the reporting system on the front end. Because it’s hard to get independent and science-based measurements from IT suppliers, we started working with external partners and startups who can make an estimate for us. So if I can’t get carbon emissions data directly from a cloud provider, I take my invoices containing consumption data, and then take the location of the data center and the kinds of equipment used. I put all that information to a REST API provided by a Berlin-based company, and using a transparent algorithm, they give me carbon emissions per service.” Internally speaking, the head of IT sustainability role has become more common in Europe—and some of the more forward-thinking US CIOs are starting to see the need in their own organizations.
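Karcher's workflow, combining invoice consumption data with data-center location and equipment type before sending it to an external estimator, might be sketched as follows. The payload shape, field names, and the endpoint in the final comment are hypothetical illustrations, not the Berlin provider's actual API.

```python
def build_emissions_request(invoice_lines, region, hardware):
    """Assemble an estimation request from cloud invoice lines plus site metadata."""
    return {
        "region": region,                  # data-center location
        "hardware": hardware,              # kinds of equipment used
        "consumption_kwh": sum(line["kwh"] for line in invoice_lines),
    }

invoice = [
    {"service": "vm", "kwh": 120.0},
    {"service": "storage", "kwh": 30.0},
]
payload = build_emissions_request(invoice, region="eu-central", hardware="general-purpose")
print(payload)
# The payload would then be POSTed to the estimator, e.g.:
# requests.post("https://api.example-carbon.invalid/v1/estimate", json=payload)
```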



Quote for the day:

"The only way to follow your path is to take the lead." -- Joe Peterson

Daily Tech Digest - December 27, 2022

Prepping for 2023: What’s Ahead for Frontend Developers

WebAssembly will work alongside JavaScript, not replace it, Gardner said. If you don’t know one of the languages that compile to WebAssembly, Rust might be a good one to learn because it’s new and, Gardner said, it’s gaining the most traction. Another route to explore: blending JavaScript with WebAssembly. “Rust to WebAssembly is one of the most mature paths because there’s a lot of overlap between the communities, a lot of people are interested in both Rust and WebAssembly at the same time,” he said. “Plus, it’s possible to blend WebAssembly with JavaScript so it’s not an either-or situation necessarily.” That in turn will yield new high-performing applications running on the web and mobile, Gardner added. “You’re not going to see necessarily a ‘Made with WebAssembly’ banner show up on websites, or anything along those lines, but you are going to see some very high-performing applications running on the web and then also on mobile, built off of WebAssembly,” he said. ... “Organizations are trying to automate and improve their test automation, and part of that shift to shipping faster means, you have to find ways to optimize what you’re doing,” DeSanto said.


What is FinOps? Your guide to cloud cost management

“FinOps brings financial accountability — including financial control and predictability — to the variable spend model of cloud,” says J.R. Storment, executive director of the FinOps Foundation. “This is increasingly important as cloud spending makes up ever more of IT budgets.” It also enables organizations to make informed trade-offs between speed, cost, and quality in their cloud architecture and investment decisions, Storment says. “And organizations get maximum business value by helping engineering, finance, technology, and business teams collaborate on data-driven spending decisions,” he says. Aside from bringing together the key people who can help an organization gain better control of its cloud spending, FinOps can help reduce cloud waste, which IDC estimates between 10% to 30% for organizations today. “Moving from show-back cloud accounting, where IT still pays and budgets for cloud spending, to a charge-back model, where individual departments are accountable for cloud spending in their budget, is key to accelerating savings and ensuring only necessary cloud projects are implemented,” Jensen says.
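The show-back-to-charge-back shift Jensen describes can be sketched as a roll-up of tagged billing lines per department, so each budget owner sees (and pays for) its own cloud spend. The resource names, tag scheme, and costs below are hypothetical.

```python
from collections import defaultdict

def charge_back(billing_lines):
    """Roll billed resources up into per-department totals using a 'department' tag."""
    totals = defaultdict(float)
    for line in billing_lines:
        dept = line.get("tags", {}).get("department", "untagged")
        totals[dept] += line["cost"]
    return dict(totals)

lines = [
    {"resource": "vm-1", "cost": 320.0, "tags": {"department": "data-science"}},
    {"resource": "db-1", "cost": 180.0, "tags": {"department": "marketing"}},
    {"resource": "vm-2", "cost": 75.0},  # untagged spend is surfaced, not hidden
]
print(charge_back(lines))
```

Surfacing an explicit "untagged" bucket matters in practice: untagged spend is what quietly accumulates under a show-back model.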


IoT Analytics: Making Sense of Big Data

The principles that guide enterprises in the way they approach IoT analytics data are: Data is an asset: data has a specific and measurable value for the enterprise; Data is shared: data must be shared across the enterprise and its business units, and users have access to the data necessary to perform their activities; Data trustees: each data element has trustees accountable for data quality; Common vocabulary and data definitions: data definition is consistent, and the taxonomy is understandable throughout the enterprise; Data security: data must be protected from unauthorised users and disclosure; Data privacy: privacy and data protection are considered throughout the life cycle of a Big Data project, and all data sharing conforms to the relevant regulatory and business requirements; and Data integrity and the transparency of processes: each party to a Big Data analytics project must be aware of and abide by their responsibilities regarding the provision of source data and the obligation to establish and maintain adequate controls over the use of personal or other sensitive data.


Reframing our understanding of remote work

The remote and hybrid work trend is the most disruptive change in how businesses work since the introduction of the personal computer and mobile devices. Then, like now, the conversation was lost in the weeds. Should we allow PCs? Should we allow employees to bring their own devices? Should we issue pagers, feature phones, then smartphones to employees or let them use their own? In hindsight, it's clear that all these concerns were utterly pointless. The PC revolution was a tsunami of certainty that would wash away old ways of doing everything. So the only question should have been: How do we ensure these devices are empowering, secure, and usable? All focus should have been on the massive learning curve faced by organizations (what's the best way to deploy, update, secure, provision, purchase, and network these devices for maximum benefit) and by end users. In other words, while everyone gnashed their teeth over whether to allow devices — or what kind or level of devices to allow — the energy could have been much better spent realizing the entire issue was about skills and knowledge.


Developing Successful Data Products at Regions Bank

Misra said that there are a few especially important components involved in the success of the data product partner role and the discipline of product management for analytics and AI initiatives. One is to ensure that the partner role is strategic, proactive, and focused on critical business needs, and not simply an on-demand service within the company. All data products should address a critical business priority for partners and, when deployed, should deliver substantial incremental value to the business. The teams that work on the products should employ agile methods and include data scientists, data managers, data visualization experts, user interface designers, and platform and infrastructure developers. Misra is a fan of software engineering disciplines — systematic techniques for the analysis, design, implementation, testing, and maintenance of software programs — and believes that they should be employed in data science and data products as well. This product orientation also requires that there’s a big-picture focus, not just by the data product partners but by everyone on the product development teams. 


Amplified security trends to watch out for in 2023

Cybercriminals target employees across different industries to surreptitiously recruit them as insiders, offering them financial enticements to hand over company credentials and access to systems where sensitive information is stored. This approach isn’t new, but it is gaining popularity. A decentralized work environment makes it easier for criminals to target employees through private social channels, as the employee does not feel that they are being watched as closely as they would in a busy office setting. Aside from monitoring user behavior and threat patterns, it’s important to be aware of and be sensitive about the conditions that could make employees vulnerable to this kind of outreach – for example, the announcement of a massive corporate restructuring or a round of layoffs. Not every employee affected by a restructuring suddenly becomes a bad guy, but security leaders should work with Human Resources or People Operations and people managers to make them aware of this type of criminal scheme, so that they can take the necessary steps to offer support to employees who could be affected by such organizational or personal matters. 


What is the Best Cloud Strategy for Cost Optimization?

More often than not, some resources are underutilized. This usually stems from overbudgeting for certain processes. For instance, a cloud computing instance may be underutilized to the point that it uses less than 5% of its CPU. Note that with cloud services, you pay for the storage and computing power, rather than the space. In the instance highlighted above, it's clear that there's a case of significant waste. In your bid to optimize costs, it's best to identify these idle instances and consolidate the workload into fewer cloud instances. It can be difficult to understand how much power the system uses without adequate visualization. Heat maps are highly useful in cloud cost optimization. This infographic tool highlights the high and low points of computing demand and consumption. This data can be useful in establishing stop and start times for cost reduction. Visual tools like heat maps can help you identify clogged-up sections before they become problematic. When a system load becomes one-directional, you know it's time to adjust and balance it before it disrupts your processes.
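The idle-instance check described here can be sketched as follows: instances whose average CPU utilization stays under the threshold are flagged as consolidation candidates. The instance names and utilization samples are hypothetical; real tooling would pull these from the cloud provider's monitoring APIs.

```python
IDLE_CPU_THRESHOLD = 5.0  # percent, per the <5% example in the text

def consolidation_candidates(instances):
    """Return names of instances whose average CPU utilization is below the threshold."""
    return [
        name for name, samples in instances.items()
        if sum(samples) / len(samples) < IDLE_CPU_THRESHOLD
    ]

instances = {
    "web-1": [42.0, 55.0, 38.0],   # busy, leave alone
    "batch-7": [2.0, 3.5, 1.0],    # averages ~2.2% -- nearly idle
}
print(consolidation_candidates(instances))
```

In practice the flagged workloads would then be packed onto fewer instances (or rightsized), which is where the savings actually materialize.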


Server supply chain undergoes shift due to geopolitical risks

Adding to the motivation to exit China and Taiwan was the saber rattling and increasingly bellicose tone from Beijing to Taiwan, along with fairly severe sanctions on semiconductor sales from the U.S. Department of Commerce. This has led some US-based cloud service providers, such as Google, AWS, Meta, and Microsoft, to look at adding server production lines outside Taiwan as a precautionary measure, according to TrendForce. There have been a number of other moves as well. In the US, Intel is spending $20 billion on an Arizona fab and another $20 billion on fabs in Ohio. TSMC is spending $40 billion on fabs in Arizona as well, and Apple is moving production to the US, Mexico, India, and Vietnam. TrendForce also noted a phenomenon it calls “fragmentation” as an emerging model in the management of the server supply chain. It used to be that server production and the assembly process were handled entirely by ODMs. In the future, the assembly task of a server project will be given to not only an ODM partner but also a system integrator.


What’s the Difference Between Kubernetes and OpenShift?

Red Hat provides automated installation and upgrades for most common public and private clouds, allowing you to update on your own schedule and without disrupting operations. This process is perhaps one of the biggest differentiators between OpenShift and the standard Kubernetes environment, as it provides a runbook for updates and uses this to avoid disruption. If you're running a cluster of OpenShift servers, you will be able to upgrade while applications continue to run, with OpenShift's orchestration tools moving nodes and containers as required. When it comes to managed on-premises Kubernetes, OpenShift is perhaps best compared with Microsoft's Azure Arc tooling, which brings Azure's managed Kubernetes on-premises using the Azure Portal as a management tool, or with VMware's Tanzu. They are all based on certified Kubernetes, adding their own management tooling and access control. OpenShift is more a sign of Kubernetes' importance to enterprise application development than anything else.


CISO Budget Constraints Drive Consolidation of Security Tools

Piyush Pandey, CEO at Pathlock, a provider of unified access orchestration, says budget constraints will affect not only solution purchases but also potentially the staff required to run them. “This will likely drive the consolidation of solutions that span across multiple organizations, such as access, compliance, and security tools,” he says. “This consolidation into platforms will help organizations prioritize their resources -- time, money, and people.” He says organizations that focus on comprehensive solutions can drive more synergies across different departments to be compliant. “This won't just be about cost savings, however -- it will also help reduce the complexity of their infrastructure, eliminating multiple standalone tools and solutions,” Pandey adds. Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation, explains the global financial downturn has hit multiple sectors, which means budgets are short overall. “The challenge will be keeping cybersecurity postures strong, even in the face of budget cuts,” he says.



Quote for the day:

"Leadership development is a lifetime journey, not a quick trip." -- John Maxwell