Organisations must find a way to reduce their technical debt by replacing tight couplings with a more flexible integration layer. As such, API strategies are becoming more important than ever. APIs create a loose coupling between applications, data, and devices, so organisations can make changes quickly without impacting their existing integrations or the functionality of digital services. It therefore becomes easier to accelerate innovation and deliver new products and services faster, without increasing the risk of business disruption or spiralling costs. One organisation putting this into practice is Allica Bank, a new, digital-only bank that exclusively caters to SMEs. Rather than build its offerings around one core platform as traditional banks do, Allica is built around a more flexible integration layer, underpinned by APIs. When it needs to expose data from a certain application or system, it does so via an API, without writing custom code to connect the systems in question. This makes for a much more agile operation, as new services can be switched in and out as needed. For Allica, this level of agility has been critical to its ability to meet its customers’ needs for urgent access to credit in 2020.
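The pattern Allica applies can be sketched in miniature: callers depend on an API contract rather than on any one concrete system, so a backend can be swapped without touching its consumers. A minimal illustration (the class names, account IDs, and balances are hypothetical, not Allica's actual stack):

```python
from abc import ABC, abstractmethod

class AccountStore(ABC):
    """The integration layer: callers depend on this contract, not on any one system."""
    @abstractmethod
    def balance(self, account_id: str) -> int: ...

class LegacyCoreBanking(AccountStore):
    def balance(self, account_id: str) -> int:
        return {"acme": 1200}.get(account_id, 0)  # stand-in for the old system

class NewLedgerService(AccountStore):
    def balance(self, account_id: str) -> int:
        return {"acme": 1200}.get(account_id, 0)  # stand-in for a replacement service

def report(store: AccountStore, account_id: str) -> str:
    # This caller never changes when the backing system is swapped out.
    return f"{account_id}: {store.balance(account_id)}"

print(report(LegacyCoreBanking(), "acme"))  # old backend
print(report(NewLedgerService(), "acme"))   # swapped for the new one; caller untouched
```

Because `report` only knows the `AccountStore` contract, "switching services in and out" is a one-line change at the point where the implementation is chosen.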
As financial firms get more comfortable with machine learning in their most advanced departments, they’ll start to adopt it in other areas to deal with the vast treasure trove of structured and unstructured data pouring into their data lakes. Whether that’s giving customers better answers when they call with questions or quickly determining whether someone qualifies for a loan, machine learning will seep into every aspect of the financial enterprise. It will also revolutionize the areas where it’s already dominant: trading and fraud detection. None of this comes without risks, though. Rule-based systems are at least easier to understand: people can inspect and interpret hand-coded rules, but machine learning systems are more opaque, and we don’t always know why a machine made the decision it made. Even worse, as governments take their first stabs at regulation, it’s clear from early drafts of bills in the EU that regulators don’t fully understand how machine learning models work, and they’ve drafted vaguely worded bills that are open to interpretation and create additional compliance complexity.
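The interpretability gap is easy to see in miniature. Below is a sketch contrasting a hand-coded loan rule with a learned linear score; the thresholds, weights, and bias are invented for illustration, and real models carry millions of fitted parameters rather than two:

```python
# A hand-coded loan rule: every threshold is visible and auditable.
def rule_based_decision(income: float, debt: float) -> bool:
    return income >= 30_000 and debt / income < 0.4

# A learned linear score making the same kind of decision. Anyone can read the
# rule above; explaining *why* these fitted weights reject an applicant is much
# harder, and production models are far larger and less linear than this.
WEIGHTS = {"income": 0.00004, "debt": -0.0001}  # hypothetical fitted values
BIAS = -1.0

def model_decision(income: float, debt: float) -> bool:
    score = WEIGHTS["income"] * income + WEIGHTS["debt"] * debt + BIAS
    return score > 0
```

Both functions may agree on most applicants, but only the first one can be handed to a regulator and defended line by line.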
Data ownership is a complex concept. If I have data about you in my database, who owns that data? Does it depend on what kind of data it is? For example, if I know you just bought a new boat, can I sell that information? What if I know you were just diagnosed with cancer? According to a 2018 survey, 90% of respondents believe it is unethical to share data about them without their consent, highlighting growing concerns surrounding data control and ownership. Bearing this in mind, and recognizing the importance of building citizen trust, some governments have begun to establish frameworks to give citizens greater control over their data. For instance, in January 2020, Indonesia’s government submitted a bill to parliament that would require explicit consent to distribute personal data such as name, nationality, religion, sexual orientation, or medical records. Violators could face up to seven years in jail for sharing citizen data without consent. Another governance approach is shown by the UK National Health Service (NHS). In the COVID-19 app of the UK NHS, the Department of Health and Social Care, NHS England, and NHS Improvement are the designated data controllers.
There is a reason why this is a requirement to become one of the most successful. Security defenders need to be 100% perfect at protecting 100% of the countless entry points 100% of the time in order to prevent breaches, while hackers only need one exploit that works. While that adage is considerably oversimplified, the moral is true: being a defender means keeping up with an impossible firehose of changing technologies, controls, and attacks. Not to mention, your adversaries are not pieces of code – they are creative and motivated people. And let’s be honest, hacking is fun! When you are engaged in something fun, you likely have heightened motivation and creativity, so only those who approach the challenge of defense work with the same level of play and creativity as hackers will rise to the top of their team, company, and industry. Reflections of this “playful” approach can be seen in quotes from some of today’s most famous contemporary artists. “When someone sees one of my paintings, I want them to really feel the place that I’m depicting. And so, my desire is that they’re going to want to travel into that painting and become part of it.” – James Coleman
Interestingly, the launch of Appian’s new low-code automation comes at a time when enterprises are looking for quick ways to deploy AI-powered applications and smooth workflow automation across departments with limited resources and agile processes. Today, low-code and no-code technology platforms have emerged as a go-to model for businesses. Several players, including Appian, Microsoft, Amazon, Pega, and ServiceNow, are working on products and ideas to ease the burden for enterprises. In India, companies like Infosys, HCL Technologies and Tech Mahindra, alongside various startups, are also working on this technology. “This is the time for low-code automation platforms,” said Matt Calkins, Appian founder and CEO. “We have just started a new decade, but low-code is how applications are built in the future. It’s inevitable.” Gautam Nimmagadda, CEO of Quixy, a cloud-based no-code application development platform, told AIM that no-code would allow more companies to participate in software development, freeing professional developers to focus on advanced and specialised areas.
AIOps offers organizations the potential to improve IT team productivity and reduce costs while fortifying overall business stability and resilience. The technology also makes it possible to gain deep insights into customer experiences and journeys. "AIOps can bring predictive abilities to operations so organizations are able to adjust to changes," Velayudham said. "By automating the mundane work and uncovering insights from large datasets that aren’t possible to sift through manually, AIOps can increase IT team efficiency," he added. By taking a strategic and intelligent approach to IT automation, businesses can also accelerate their digital transformation efforts. "IT automation can also eliminate repetitive manual tasks, freeing up your IT team to address more strategic tasks, making the entire team more valuable to the business," Mirani said. The AIOps vendor field is growing rapidly. This should help ease AIOps adoption, but it is also creating some confusion for potential customers as they sort through the various tools and approaches.
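The "datasets that aren’t possible to sift through manually" point can be illustrated with a toy anomaly detector over a metrics stream. This is a deliberately simple z-score sketch; real AIOps platforms use far richer models, and the latency values here are invented:

```python
import statistics

def flag_anomalies(samples, threshold=2.5):
    """Flag indices more than `threshold` standard deviations from the mean.

    A toy stand-in for the large-dataset sifting AIOps tools automate.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]

latencies_ms = [12, 11, 13, 12, 11, 12, 13, 11, 12, 240]  # one misbehaving request
print(flag_anomalies(latencies_ms))  # flags index 9, the spike
```

Run continuously over monitoring data, even a crude detector like this surfaces the spikes no human would find by scrolling through logs.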
Kube-monkey is a version of Netflix’s famous (in IT circles, at least) Chaos Monkey, designed specifically to test Kubernetes clusters. Chaos Monkey essentially asks: “What happens to our application if this machine fails?” It does this by randomly terminating production VMs and containers. As a manifestation of the broader discipline of chaos engineering, the core idea behind the open source tool is to foster resilient, fault-tolerant applications by treating failure as a given in any environment. ... Kubernetes has lots of native security controls that require proper configuration and fine-tuning over time. The community commitment to the platform’s security has also led to the creation of various commercial and open source tools for further ensuring the security of your applications and environment. Kube-hunter is a good example: it’s an open source tool for pen-testing your cluster and its nodes. Basically, penetration testing is to security what chaos testing is to resiliency. By assuming that you have weaknesses that an attacker can exploit (because you almost certainly do), you more proactively build security into your systems. You’re attacking yourself to discover holes before someone else does.
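The chaos-monkey idea is simple enough to sketch: pick a random victim from the workloads that have opted in to chaos testing (kube-monkey itself uses Kubernetes labels such as `kube-monkey/enabled` for opt-in). A toy illustration with hypothetical pod names, nothing from kube-monkey's actual codebase:

```python
import random

def pick_victim(pods, opted_in, rng=random):
    """Chaos-monkey-style selection: choose one random opted-in pod to terminate.

    Only workloads that explicitly opted in are candidates, mirroring
    kube-monkey's label-based opt-in model.
    """
    candidates = [p for p in pods if p in opted_in]
    return rng.choice(candidates) if candidates else None

pods = ["cart-7f9", "cart-2b1", "payments-4c3", "search-9aa"]
opted_in = {"cart-7f9", "cart-2b1"}  # only the cart team has opted in

victim = pick_victim(pods, opted_in, random.Random(42))
print(f"terminating {victim}")  # in a real cluster: kubectl delete pod <victim>
```

The opt-in set is the important design choice: chaos testing is only safe when teams have declared their workloads ready to survive random termination.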
The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. Because if I’m overtrusting these systems and these systems are making decisions that have different outcomes for different groups of individuals—say, a medical diagnosis system has differences between women versus men—we’re now creating systems that augment the inequities we currently have. That’s a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can’t recover from. So we really have to fix it. The positives are that automated systems are better than people in general. I think they can be even better, but I personally would rather interact with an AI system in some situations than certain humans in other situations. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially if you have a novice person. It’s a better outcome. It just might be that the outcome isn’t equal.
You need a distinct strategy for testing microservices, as they sit behind a distinct architecture and have several integrations with other microservices, both within the organisation and from the outside world (third-party integrations). Moreover, they necessitate a huge amount of collaboration among the various squads or teams developing independent microservices. Additionally, they are independent, single-purpose services and are deployed separately as well as frequently. While the benefits of microservices are clear, the architecture also brings complicated challenges. As manifold services interact with each other over REST-based endpoints, performance degradation can sink a business. For instance, for an eCommerce app, shaving 100ms off its shopping cart or product listing pages can directly influence order placement and the bottom line. Likewise, for an event-driven product with frequent customer interaction, even a delay of a few milliseconds can annoy clients and cause them to go somewhere else. Whatever the situation, reliability and performance are significant elements of software development, so businesses must invest the necessary time and effort into performance testing.
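Given how directly latency hits the bottom line, even a basic latency harness earns its keep. A minimal sketch: time repeated calls to an endpoint and report p50/p95. The endpoint here is simulated with a sleep; a real performance test would issue HTTP requests against the live service under load:

```python
import statistics
import time

def measure_latency_ms(call, runs=50):
    """Time repeated calls to a service endpoint and report p50/p95 in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()  # stands in for a request to the microservice under test
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p95": samples[int(runs * 0.95) - 1],
    }

def fake_cart_lookup():
    time.sleep(0.002)  # simulate a ~2 ms downstream call

stats = measure_latency_ms(fake_cart_lookup)
print(f"p50={stats['p50']:.1f}ms p95={stats['p95']:.1f}ms")
```

Tracking the p95 alongside the median matters in microservice chains: a request that fans out to many services experiences the tail latency of each one, so the slowest percentiles dominate user experience.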
AppSec teams are charged with making sure software is safe. Yet, as the industry's productivity multiplied, AppSec experienced shortages in resources to cover basics like penetration testing and threat modeling. The AppSec community developed useful methodologies and tools — but outnumbered 100 to 1 by developers, AppSec simply cannot cover it all. Software security is a highly complex process built upon layers of time-consuming, detail-oriented tasks. To move forward, AppSec must develop its own approach to organize, prioritize, measure, and scale its activity. Agile approaches and tools emerged from recognizing the limitations of longstanding approaches to software development. However, AppSec's differences mean it can't simply copy software development. For example, bringing automated testing into CI/CD might overlook significant things. First, every asset delivered outside CI/CD will remain untested and require alternative AppSec processes, potentially leading to unmanaged risk and shadow assets. Second, when developers question the quality of a report, it creates friction between engineers and security, jeopardizing healthy cooperation.
Quote for the day:
“Make your team feel respected, empowered and genuinely excited about the company’s mission.” -- Tim Westergren