Daily Tech Digest - June 04, 2019

What the Future of Fintech Looks Like

Fintech has been driving huge changes across the financial services sector, but one area seeing exponential change is the ultra-high-net-worth individual (UHNWI) space. Crealogix Group, a global market leader in digital banking, has been working with banks across the world on their digital transformation journeys for over 20 years, and it is only recently that it has seen growing momentum in private wealth to digitize. Pascal Wengi, the Asia-Pacific managing director of Crealogix, says: “The old ways of servicing these clients through a personal touch are quickly moving to digitally led platforms, with younger, tech-savvy UHNWIs wanting an immediate and comprehensive view of their assets without waiting for a phone call. At the same time, they also want customized solutions catered to their unique financial needs.” Wengi insists on platforms that allow access on both sides: clients, and their advisors, family office teams and accountants.



Data Gravity
Data gravity is a metaphor introduced into the IT lexicon by a software engineer named Dave McCrory in a 2010 blog post. The idea is that data and applications are attracted to each other, similar to the attraction between objects that is explained by the law of gravity. In the current enterprise data analytics context, as datasets grow larger and larger, they become harder and harder to move. So, the data stays put. It’s the gravity — and other things that are attracted to the data, like applications and processing power — that moves to where the data resides. Digital transformation within enterprises — including IT transformation, mobile devices and the Internet of Things — is creating enormous volumes of data that are all but unmanageable with conventional approaches to analytics. Typically, data analytics platforms and applications live in their own hardware and software stacks, and the data they use resides in direct-attached storage (DAS). Analytics platforms — such as Splunk, Hadoop and TensorFlow — like to own the data. So, data migration becomes a precursor to running analytics.
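To make the "compute moves to the data" idea concrete, here is a toy Python sketch of the gravity-friendly pattern: rather than hauling a large table across the network, the small query travels to the large dataset and only the result comes back. The in-memory SQLite database and the "sales" table are hypothetical stand-ins for any remote data store, not anything from the article.

```python
import sqlite3

# Stand-in for a remote data store holding a large dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?)", [(120.0,), (75.5,), (9.99,)])

# Anti-pattern (commented out): drag every row over the wire, then
# aggregate on the application side.
# rows = conn.execute("SELECT amount FROM sales").fetchall()
# total = sum(amount for (amount,) in rows)

# Gravity-friendly: ship the computation to where the data lives, and
# move only a one-row answer back.
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(f"total sales: {total}")
```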


5 requirements for success with DataOps strategies

Organizations that operate at this speed of change require modern data architectures that allow them to put ever-expanding volumes of data to use quickly. These infrastructures – based on hybrid and multi-cloud for greater efficiency – provide enterprises with the agility they need to compete more effectively, improve customer satisfaction and increase operational efficiencies. When the DataOps methodology is part of these architectures, companies are empowered to support real-time data analytics and collaborative data management approaches while easing the many frustrations associated with access to analytics-ready data. DataOps is a verb, not a noun: it is something you do, not something you buy. It is a discipline that involves people, processes and enabling technology. However, as organizations shift to modern analytics and data management platforms in the cloud, they should also take a hard look at their legacy integration technology to make sure that it can support the key DataOps principles that will accelerate time to insight.



An API architect typically performs a high-level project management role within a software development team or organization. Their responsibilities can be extensive and diverse, and a good API architect must combine advanced technical skills with business knowledge and a focus on communication and collaboration. There are often simultaneous API projects, and the API architect must direct the entire portfolio. API architects are planners more than coders. They create and maintain technology roadmaps that align with business needs. For example, an API architect should establish a reference architecture for the organization's service offerings, outlining each one and describing how they work. The architect should define the API's features, as well as its expected security setup, scalability and monetization. The API architect sets best practices, standards and metrics for API use, as well. These guidelines should evolve as mistakes become clear and better options emerge.



Edge-based caching and blockchain nodes speed up data transmission

Data caches are nothing new, but Bluzelle claims its system, written in C++ and available on Linux and Docker containers, among other platforms, is faster than others. It further says that if its system and a more traditional cache were connected to the same MySQL database in, say, Virginia, its users would get the data three to 16 times faster than over a traditional “non-edge-caching” network. Writing updates to all Bluzelle nodes around the world takes 875 milliseconds (ms), it says. The company has been concentrating its efforts on gaming, and with a test setup in Virginia, it says it was able to deliver data 33 times faster—at 22ms to Singapore—than a normal, cloud-based data cache. That traditional cache (located near the database) took 727ms in the Bluzelle-published test. In a test to Ireland, it claims 16ms versus 223ms using a traditional cache. An algorithm is partly the reason for the gains, the company explains. It “allows the nodes to make decisions and take actions without the need for masternodes,” the company says. Masternodes are the server-like parts of blockchain systems.


Microsoft's Vision For Decentralized Identity

Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable. But identity data has too often been exposed in breaches, affecting our social, professional, and financial lives. Microsoft believes that there’s a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This whitepaper explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations. Today we use our digital identity at work, at home, and across every app, service, and device we engage with. It’s made up of everything we say, do, and experience in our lives—purchasing tickets for an event, checking into a hotel, or even ordering lunch. 


Your 3-minute guide to serverless success
What has propelled the use of serverless? Faster deployment, the simplification and automation of cloudops (also known as “no ops” and “some ops”), integration with emerging devops processes, and some cost advantages. That said, most people who want to use serverless don’t understand how to do it. Many think that you can take traditional on-premises applications and deem them serverless with the drag of a mouse. The reality is much more complex. Indeed, serverless application development is more likely a fit for net-new applications. Even then you need to consider a few things, mainly that you need to design for serverless. Just as you should design for containers and other execution architectures that are optimized by specific design patterns, you should design for serverless. ... The trick to building and deploying applications on serverless systems is understanding what serverless is and how to take full advantage of it. We have a tendency to apply all of our application architecture experience to all types of development technologies, and that leads to inefficient use of the technology, which won’t produce the ROI expected—or worse, produces negative ROI, which is becoming common.
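As a rough sketch of what "designing for serverless" means in practice, consider a minimal AWS Lambda-style handler in Python. The event shape and field names here are hypothetical illustrations, not from the article; the point is the design constraints: stateless, single-purpose, and driven entirely by the incoming event.

```python
import json

def handler(event, context):
    # All state arrives in the event; nothing persists between
    # invocations. "order_id" is a hypothetical input field.
    order_id = event.get("order_id")
    if order_id is None:
        # Fail fast: there is no session or local state to fall back on.
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id is required"})}
    # One bounded unit of work per invocation; long-running loops,
    # local caches and background threads work against the model.
    return {"statusCode": 200,
            "body": json.dumps({"processed": order_id})}
```

Lifting an on-premises application that assumes local disk, long-lived connections, or in-memory session state into this shape is exactly the "drag of a mouse" migration the excerpt warns against.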


Author Q&A: Chief Joy Officer

Change is hard. We get used to the way we work and we assume it’s just the way it has to be. Inertia is a big deal. Many of us have tried to make changes in our personal life—our health, our financial situation—only to find out we’re stuck in a rut. We know we need to change our behaviors in order to change our outcomes, but changing human behavior is hard. What probably prevents change more than anything is success. If you’re successful enough, then it’s hard to be convinced of the value of change. You’ll say, well, why should we change when we’re already successful? Of course the problem with success is that it is often fleeting. It’s not like you reach a level of success and then automatically stay there. Every organization, every market, and every business ebbs and flows. When it’s flowing awesomely, we figure we don’t need to change. But when it’s ebbing, we get scared—and sometimes that’s the least opportune time to make a change, because fear can cloud our ability to make the best decisions for our organizations or our teams.


Discover practical serverless architecture use cases


A more complete serverless architecture-based system comes into play with workloads related to video and picture analysis. In this example, serverless computing enables an as-needed workflow to spin up out of a continuous process, and the event-based trigger pulls in an AI service: Images are captured and analyzed on a standard IaaS environment, with events triggering the use of Amazon Rekognition or a similar service to carry out facial recognition when needed. The New York Times used such an approach to create its facial recognition system that used public cameras around New York's Bryant Park. Software teams can also use serverless designs to aid technical security enforcement. Event logs from any device on a user's platform can create triggers that send a command into a serverless environment. The setup kicks off code to identify the root cause of the logged event, or a machine learning- or AI-based analysis of the situation on the device. This information, in turn, can determine what steps to take to rectify issues and protect the overall system.
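A hedged sketch of that image-analysis pattern, in Python with boto3: an S3 upload event triggers the function, which asks Amazon Rekognition to analyze the new object where it already lives. The downstream handling is an illustrative assumption, not a detail from the article.

```python
import boto3

# Created once outside the handler so warm invocations reuse the client.
rekognition = boto3.client("rekognition")

def handler(event, context):
    # Standard S3 event shape: pull out the bucket and key of the
    # newly uploaded image.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Analyze the image in place in S3; no copy has to move first.
    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["DEFAULT"],
    )

    # Downstream steps (alerting, indexing, redaction) would key off
    # the returned face details; here we just report a count.
    return {"faces_found": len(response["FaceDetails"])}
```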


It’s time for the IoT to 'optimize for trust'

The research by cloud-based security provider Zscaler found that about 91.5 percent of transactions by internet of things devices took place over plaintext, while 8.5 percent were encrypted with SSL. That means if attackers could intercept the unencrypted traffic, they’d be able to read it and possibly alter it, then deliver it as if it had not been changed. Researchers looked through one month’s worth of enterprise traffic traversing Zscaler’s cloud seeking the digital footprints of IoT devices. They found and analyzed 56 million IoT-device transactions over that time, and identified the types of devices, the protocols they used, the servers they communicated with, how often communication went in and out, and general IoT traffic patterns. The team tried to find out which devices generate the most traffic and the threats they face, and discovered that 1,015 organizations had at least one IoT device. The most common devices were set-top boxes (52 percent), then smart TVs (17 percent), wearables (8 percent), data-collection terminals (8 percent), printers (7 percent), IP cameras and phones (5 percent) and medical devices (1 percent).



Quote for the day:


"The ability to continuously simplify, while adding more value and removing clutter, is a superpower." -- @ValaAfshar

