Daily Tech Digest - June 04, 2020

Machine learning will transform the banking sector

The unified view that AI provides allows institutions to see a customer's financial status across multiple accounts in an instant. Combined with previously contracted products, transaction histories, and individual interactions, this information ensures that personalized services are always available. The technology redefines customization: instead of requiring a customer to complete a series of questionnaires or surveys to match specific data points to potential products of interest, machine learning automates this process by examining all of the consumer's activity over that person's history with the organization. It can even pull information from the news or social media posts to gauge the viability of an offer before one is requested. That makes the predictive mechanisms more accurate and shortens the time it takes for someone to complete the processes needed to access something new. As data warehousing and information processing continue to improve, existing machine learning models will use the financial profiles the technology builds internally to create unique initiatives that encourage more customer interaction.
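The kind of propensity matching described above can be sketched as a simple weighted scoring of customer activity. This is a minimal illustration only; the feature names, weights, and threshold are invented, not any bank's actual model:

```python
import math

# Hypothetical sketch: a logistic propensity score for a product offer,
# built from a customer's activity. Feature names and weights are invented.

def propensity_score(customer, weights):
    """Weighted sum of activity features, squashed into the 0..1 range."""
    raw = sum(w * customer.get(feature, 0.0) for feature, w in weights.items())
    return 1.0 / (1.0 + math.exp(-raw))

weights = {
    "recent_travel_spend": 0.8,      # might suggest a travel rewards card
    "savings_balance_growth": 0.5,
    "mortgage_enquiries": 1.2,
}

customer = {"recent_travel_spend": 1.5, "savings_balance_growth": 0.2}
score = propensity_score(customer, weights)
should_offer = score > 0.7  # threshold chosen by the business
```

In a real system the features would be derived from transaction history and the weights learned from past offer acceptances rather than hand-set.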


Survey: Security Concerns Slow Down IoT Deployments

Although security requirements may mean IoT projects take longer, the delays also indicate that enterprises are cognizant of how a growing number of devices on their networks introduces new attack vectors. "The risks with IoT potentially increase because of the diversity of deployments and technologies in IoT networks - general enterprise security is much more standardized and so easier to deploy and keep updated," says Alexandra Rehak, chief analyst with the London-based consultancy Omdia and head of its internet of things practice. The Omdia-Syniverse IoT Enterprise Survey, commissioned by Syniverse, which offers private networks for fleets of IoT devices, polled 200 enterprises in North America and Europe between January and March that have deployed IoT devices. Of all enterprises polled, 86% reported that IoT projects were delayed or constrained by security. The survey covered companies in healthcare, financial services, manufacturing, retail and hospitality, and transportation. Their concerns over security vary: the manufacturing industry, for example, is most worried about unauthorized devices joining the network, while healthcare and finance rank regulatory and compliance concerns high.


Layoffs rock the IT services industry amid move to the cloud

So what are we to make of all this? The obvious culprit is the move to the cloud. Enterprises don’t need to hire as many consultants making six-figure salaries when AWS or Microsoft is handling half their IT load. But DXC said it wasn’t the cloud, it was its own bloated hierarchy, which is quite an admission. “There is no doubt that these big consulting firms are having to pivot because of the cloud, and it’s hitting them in the bottom line,” says Joshua Greenbaum, president of Enterprise Application Consulting, an independent consultancy. “The quarantine and emergency has accelerated plans to move to the cloud and put the brakes on projects that would have been lucrative. The combination of the two means a lot of stuff is put on hold.” Whether those jobs come back is questionable. Not helping matters is machine-learning software like the recently announced SAP Cloud ALM, which automates a lot of the basic work of cloud lifecycle management. There’s no doubt the basic, low-level work of getting data centers up and running won’t come back. There will be a shift. At the end of the day, complexity is king in enterprise software. What is made easy today is made more complex tomorrow, and that will need more skills.


What are you tuning for?

Are you tuning to improve your SQL Server licensing footprint? You can tune for CPU reduction in repetitive queries. You can tune indexes and statistics to make certain queries faster and more efficient, which in turn reduces the CPU, memory, and storage thrash while the commands are executing. You might even be able to use less of the things that SQL Server licensing is based on, namely CPUs. Are you tuning for end-user productivity? Can you quantify the pain points the users are ‘feeling’ each and every day? Can you pinpoint the database commands that are underneath those application features? Are they database-driven, or is it application data handling that is slowing down the function? Maybe the volume of data that the business accesses daily is so high that all-flash storage is the biggest gain you can make. What if faster CPUs, and not just more cores, would give your users a bigger bang for the buck? Are your repetitive queries optimal? Can you even access the commands to tune them, such as queries underneath third-party applications? What if tuning for end-user productivity meant increasing parallelism or adding to your licensing footprint?
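One way to decide what to tune first, whether for licensing or for user pain, is to weight each query's per-execution cost by how often it runs: a cheap-looking query executed half a million times can dwarf an expensive nightly job. A minimal sketch with invented numbers (in practice the figures would come from the server's query statistics, such as SQL Server's sys.dm_exec_query_stats view):

```python
# Sketch: rank tuning candidates by total CPU consumed, not per-run cost.
# All figures are invented for illustration.

queries = [
    {"query": "nightly_report",  "avg_cpu_ms": 900.0, "executions": 4},
    {"query": "order_lookup",    "avg_cpu_ms": 1.2,   "executions": 500_000},
    {"query": "customer_search", "avg_cpu_ms": 45.0,  "executions": 8_000},
]

# Total CPU burned = average cost per execution x execution count.
for q in queries:
    q["total_cpu_ms"] = q["avg_cpu_ms"] * q["executions"]

ranked = sorted(queries, key=lambda q: q["total_cpu_ms"], reverse=True)
top = ranked[0]["query"]  # the cheap-looking repetitive query wins
```

Here the 1.2 ms lookup consumes two orders of magnitude more total CPU than the 900 ms report, which is exactly the kind of repetitive query that licensing-driven tuning targets.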


Constructing the future for engineering – finding the right model where one size does not fit all

If anything goes wrong or needs adjustment, there is no “back to the drawing board” any more. In fact, there hasn’t been for a long while. It’s all about accessing the right type of data at the right stage in the process, meaning that all of these stages have to be completely interlinked. Indeed, their success depends on constant collaboration and communication between the various people engaged in carrying out their individual activities, who may be located virtually anywhere in the world. Many of the modern world’s most famous engineering projects could only have been realised by bringing together talent from around the globe, with a multitude of different departments and workflows, in one extended, virtual team. And it’s not just engineering and design these days – collaboration has to extend to marketing and sales so that marketable and sellable concepts are what ultimately gets built and put on sale. Crucially, this information also has to reach boardrooms in a business-relatable form. Photorealistic renderings of finished products are not just pretty pictures – they are pretty essential. This has been the fundamental model of engineering for the past two decades.


Digital banking is now for everyone, how will you choose to compete?

There have been attempts by traditional banks to break free from their analogue worlds and colonise new digital planets. In 2019 JPMorgan Chase’s neobank Finn in the US, and in 2020 the Royal Bank of Scotland’s neobank Bó in the UK, both failed to establish themselves, despite massive investment. Internal politics and competing technical platforms have been cited as potential root causes, but these challenges are insignificant compared to the lack of any desirable, differentiated value proposition for customers. Without one, existing customers of the parent bank had no reason to try them, and were most likely discouraged internally to avoid cannibalisation. Potential new customers in underserved segments had no reason to select them. On the other hand, the established neos had unique propositions developed in collaboration with their customers, building trust and engagement and scaling growth organically. Without a deliberate strategy around differentiation, it is not only traditional banks who will continue to fail at digital, but also the explosion of independent neos and fintechs. When unattainable feature parity with competitors drives product roadmaps and turns product teams into ‘feature factories’, customers fail to see the 10X factor needed to tip them into using something new.


Tech Disruption In Retail Banking: Australia's Big Banks Hold Their Ground As Tech Takes Center Stage

Implementing technology is a key hurdle for Australia's major banks as they rely on legacy IT systems for their core operations. On the positive side, the underlying technology (such as fiber networks, the New Payments Platform, and 5G) required for innovation is already available in Australia, as it is in countries such as Sweden and China where it is also widely implemented. Smaller regional and mutual banks face similar challenges, although the path will likely be easier for mutual banks, which use cheaper off-the-shelf IT products and have generally stayed more up to date with core banking system upgrades than their major bank peers. Australia's network infrastructure is comprehensive and sufficient to meet the data needs of imminent technological developments; over 99% of Australia's population has mobile broadband access, including in remote areas. We believe cloud migration and adopting a microservices software architecture will be key to banks' future operating performance in all banking systems, including Australia's. Cloud-based systems significantly improve system stability and lower infrastructure costs. Flexible system architecture increases the rate at which banks can update their systems to meet changing consumer needs, while also facilitating connectivity between banks and fintechs through easier application programming interface (API) integration.



Predicting the Future with Forecasting and Agile Metrics

There are three important factors that have a much higher impact on lead time than the story size, and that when left unmanaged make our teams unpredictable. First, do we have a high amount of work in progress (WIP)? When we work on too many things at the same time we are not able to focus on finishing the tasks that are already in progress. We waste time in context switching, the quality of our work decreases, and even stories that appear to be simple end up taking longer than expected. Second, how long does work spend in queues between activities? Very often in our processes there is some waiting time between one activity and another (for example, waiting for a developer to be free to start a story, waiting for the next release, etc). These queues are often invisible, they’re not represented on our boards, and it’s really common to ignore them when we estimate, as we only tend to consider the active time that we’re going to be working on something. When these queues are not managed they lead to a lot of work in progress put on hold, which in turn leads to high unpredictability.
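Because lead time is driven by WIP and queues rather than story size, agile forecasting typically works from observed throughput instead of estimates. A minimal Monte Carlo sketch of that idea (all throughput figures are invented):

```python
import random

# Hypothetical sketch: forecast how many items a team finishes in the next
# 4 weeks by resampling its own historical weekly throughput.
history = [3, 5, 2, 6, 4, 5, 3, 4]  # items completed in each past week

def forecast(history, weeks, trials=10_000, seed=42):
    """Return an 85%-confidence completion count for the next `weeks`."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(history) for _ in range(weeks)) for _ in range(trials)
    )
    # 85% of the simulated futures complete at least this many items.
    return totals[int(trials * 0.15)]

likely = forecast(history, weeks=4)
```

The forecast is only as good as the process is stable: if WIP balloons or queues grow, past throughput stops predicting future throughput, which is the article's point about managing those factors first.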


Researchers Disclose 2 Critical Vulnerabilities in SAP ASE

The former vulnerability refers to the database software failing to perform the necessary validation checks for an authenticated user while executing "dump" or "load" commands, which a malicious actor can exploit for arbitrary code execution or code injection, according to the National Vulnerability Database description. "On the next backup server restart, the corruption of configuration file will be detected by the server and it will replace the configuration with the default one. And the default configuration allows anyone to connect to the backup server using the sa login and an empty password," Rakhmanov says. CVE-2020-6252 affects only the Windows version of SAP ASE 16 with Cockpit. "The problem is that the password to log into the helper database is in a configuration file that is readable by everyone on Windows," Rakhmanov says. This means any valid Windows user can take the file and then recover the password. They can then log into the SQL Anywhere database as the special user "utility_db" and begin to issue commands and possibly execute code with local system privileges, Rakhmanov writes.


Serverless in the Enterprise: Building Stateful Applications

Cloud native applications allow enterprises to design, build, deploy and manage applications in more agile, nimble ways. These applications accelerate business value while driving greater operational efficiencies and cost savings through containers, a pay-as-you-go model, and a distributed runtime. However, current serverless implementations (namely Function-as-a-Service, or FaaS for short) are unable to fully manage business logic and state in a distributed cloud native solution, which creates inefficiencies in hyperscale applications. What is required is a “stateful” approach to serverless application design. ... Unfortunately, a lot of enterprise use cases need to be stateful — such as long-running workflows, human approval processes, and e-commerce shopping cart applications. Workflows in general require some sort of state associated with them. Pure serverless functions can’t provide that, since they exist only for short durations. Obtaining the application state is most commonly solved either by frequent database access or by saving the state at the client. But both are bad ideas from a security perspective, as well as from the perspective of scaling the database instances.
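The database workaround the excerpt mentions can be sketched as follows: the function itself holds nothing between calls, and `STORE` is a stand-in for an external key-value database (all names are hypothetical):

```python
# Sketch of the "externalize the state" workaround: the function stays
# stateless and loads/persists state on every call. STORE stands in for an
# external database shared across invocations.

STORE = {}  # stand-in for a key-value database

def add_to_cart(user_id, item):
    cart = STORE.get(user_id, [])  # load state at the start of the call
    cart.append(item)
    STORE[user_id] = cart          # persist it before the function exits
    return cart

add_to_cart("u1", "book")
add_to_cart("u1", "lamp")
cart = STORE["u1"]  # survives even though no function instance holds it
```

Every invocation pays a round trip to the store, which is exactly the database scaling pressure the excerpt flags; stateful serverless runtimes instead try to keep that state co-located with the running function.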



Quote for the day:

"If you can't embrace, absorb, and integrate new tools quickly, the industry will evolve and pass you by." - Brian Dawson
