Daily Tech Digest - January 08, 2020

Why data and analytics is so significant for Wells Fargo

Enterprise analytics was brought into the company firstly to offer a better experience and secondly, given the rapid advancements in AI and machine learning, Wells Fargo wanted to create a centre of excellence to make sure that it is bringing the “latest and greatest” into the bank. In order to do that, it is looking into machine learning use cases. “The first step was to create what we call an Artificial Intelligence Program Bank. It comprised three different teams that were put together to do this. The first team is the business team, which is part of our innovation team, and their mandate was to identify the big use cases that we want to go after, what the big focus areas are, and to figure out the areas that they want to understand and see where they can apply AI and machine learning. The second team was my team, which is all about data and data science. We ensure that we bring the right data, identify the problems, and then make sure that we have the right team members to be able to do the model development. The third team in the group was related to technology. We decided to bring these three groups together and drive forward the application of AI in the bank,” says Thota.



Jupyter Notebooks is a popular tool for data-science pros, allowing them to create and share notebooks containing code, visualizations, and other useful information. Microsoft enabled native editing of Jupyter notebooks in VS Code in its October release of the Python extension, allowing data scientists to manage source control, open multiple files, and use the IntelliSense code-completion feature. In the January release, VS Code Python extension users can now see the current kernel that the notebook is using as well as the status of the kernel, such as whether it is idle or busy. Users can change to other Python kernels from the VS Code kernel selector. Microsoft also promises that this release brings performance improvements for Jupyter in VS Code in both the Notebook editor and the Interactive Window. The improvements are the result of caching previous kernels and optimizing the search for Jupyter, according to Microsoft. Microsoft says the initial start of the Jupyter server is faster and that subsequent starts are more than twice as fast. Users should also notice a faster process when creating a new blank Jupyter notebook and when opening notebooks with a large file size.


Why Analytics Alone is No Longer Enough


Self-service analytics has been on the agenda for a long time and has brought answers closer to business users, enabled by “modern BI” technology. That same agility hasn’t happened on the data management side – until now. “DataOps” has come onto the scene as an automated, process-oriented methodology aimed at improving the quality and reducing the cycle time of data management for analytics. It focuses on continuous delivery and does this by leveraging on-demand IT resources and automating the testing and deployment of data. Technologies like real-time data integration, change data capture (CDC) and streaming data pipelines are the enablers. ... Demand for data catalogues is soaring as organizations continue to struggle with finding, inventorying and synthesizing vastly distributed and diverse data assets. In 2020, we’ll see more AI-infused metadata catalogues that will help shift this gargantuan task from manual and passive to active, adaptive and changing. This will be the connective tissue and governance for the agility that DataOps and self-service analytics provide.
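
To make the DataOps idea a little more concrete, here is a minimal sketch in plain TypeScript of how change events captured from a source system might be applied continuously to an analytics store instead of reloading it in batches. The event shape, table names and data are invented for illustration, and no specific CDC product or message broker is assumed.

```typescript
// Hypothetical change event emitted by a CDC tool (field names are illustrative only).
interface ChangeEvent {
  table: string;
  op: "insert" | "update" | "delete";
  key: string;
  row?: Record<string, unknown>;
}

// A toy "analytics store" keyed by table name plus primary key.
const analyticsStore = new Map<string, Record<string, unknown>>();

// Apply a single change event to the target store, keeping it in sync
// with the source without full-batch reloads.
function applyChange(event: ChangeEvent): void {
  const id = `${event.table}:${event.key}`;
  if (event.op === "delete") {
    analyticsStore.delete(id);
  } else if (event.row) {
    analyticsStore.set(id, event.row);
  }
}

// Simulated stream of change events; a real pipeline would consume these
// from a log-based CDC source as they occur.
const incoming: ChangeEvent[] = [
  { table: "orders", op: "insert", key: "1001", row: { total: 42 } },
  { table: "orders", op: "update", key: "1001", row: { total: 58 } },
  { table: "orders", op: "delete", key: "0999" },
];

incoming.forEach(applyChange);
console.log(analyticsStore);
```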



Evaluating data loss impact in the cloud

Following a data loss incident, organisations can see a decline in the value of competitively differentiating assets. The value of individual data sets within large organisations should be assessed and measured by the individual data owners within each team (engineering, product, marketing, HR, etc.). These data owners understand the life cycle, value and use of their specific data and should work in collaboration with the information security team to ensure appropriate risk practices are followed. In the cloud, in addition to the data itself, competitive advantage components may include algorithms tuned by the data for business intelligence and data analytics purposes. ... The scale of reputational damage depends on the organisational business model, the details of any incident, and the category of data itself. Customer data loss can lead to long-term reputational damage, especially if the organisation has been widely criticised for poor organisational and technical controls in protecting the data.


Amid privacy and security failures, digital IDs advance

Self-sovereign identity envisions consumers and businesses eventually taking control of their identifying information on electronic devices and online, enabling them to provide validation of credentials without relying on a central repository, as is done now. Self-sovereign identity technology also takes the reins away from the centralized ID repositories held by social networks, banking institutions and government agencies. A person’s credentials would be held in an encrypted digital wallet documenting trusted relationships with the government, banks, employers, schools and other institutions. But it’s important to note that self-sovereign ID systems are not self-certifying; the decision about whom to trust rests with the other party. Whoever you present your digital ID to has to decide whether the credentials in it are acceptable. "For example, if I apply for a job… and they require me to prove I graduated from a specific school and need to see my diploma, I can present that in digital form," said Ali.
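
As a rough illustration of the “no central repository” point, here is a small TypeScript sketch using Node’s built-in crypto module: a hypothetical issuer signs a credential once, the holder keeps it in a wallet, and a relying party verifies the signature locally. The identifiers and claim fields are invented for illustration and do not reflect any particular self-sovereign identity standard.

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Hypothetical issuer (e.g. a university) signs a credential once;
// no central repository is consulted afterwards.
const issuer = generateKeyPairSync("ed25519");

const credential = JSON.stringify({
  subject: "did:example:alice",            // illustrative identifier
  claim: { degree: "BSc Computer Science" },
  issuer: "did:example:university",
});
const signature = sign(null, Buffer.from(credential), issuer.privateKey);

// The holder keeps { credential, signature } in their digital wallet and
// presents both to a relying party (e.g. an employer) when asked.

// The verifier decides for itself whether to trust this issuer's key,
// then checks the signature locally.
const isValid = verify(null, Buffer.from(credential), issuer.publicKey, signature);
console.log(isValid ? "credential accepted" : "credential rejected");
```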


Security Think Tank: Hero or villain? Creating a no-blame culture

In the general business IT world, all too often the end user is identified as the point of blame for an intrusion, resulting in a culture of fear with people afraid to report anything suspicious, especially if they have clicked on a link they shouldn’t have. If there is one thing we should have learned, it is that nobody is immune to social engineering. There are numerous examples of security experts and senior managers of security companies being duped, so we must accept it is going to happen. Just as in the aviation example, this comes down to education and appropriate reporting mechanisms. Reporting must be easy, quick and provide positive feedback. Ideally, for phishing emails there should be a button that sends the suspicious email for automated analysis, gives the user instant feedback on whether the email was safe, and automatically alerts the security operations team to any unsafe email. For other suspicious activity, feedback could be via a web portal linked to a ticketing system.
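
As a sketch of what such a reporting mechanism might look like behind the button, here is a small TypeScript example. The report fields, the analysis rules and the alerting function are all invented placeholders, not any real product’s API.

```typescript
// Illustrative shape of a user-submitted phishing report; field names are assumptions.
interface PhishingReport {
  reporter: string;
  subject: string;
  senderDomain: string;
  links: string[];
}

// Stand-in for real analysis (reputation feeds, sandboxing, header checks).
function analyse(report: PhishingReport): "safe" | "suspicious" {
  const knownBadDomains = ["example-bad.test"]; // placeholder data
  const hasBadDomain = knownBadDomains.includes(report.senderDomain);
  const hasManyLinks = report.links.length > 5;
  return hasBadDomain || hasManyLinks ? "suspicious" : "safe";
}

// Stand-in for raising a ticket with the security operations team.
function alertSecOps(report: PhishingReport): void {
  console.log(`SOC ticket raised for mail from ${report.senderDomain}`);
}

// The "report" button handler: instant feedback for the user,
// automatic escalation only when something looks unsafe.
function handleReport(report: PhishingReport): string {
  const verdict = analyse(report);
  if (verdict === "suspicious") alertSecOps(report);
  return `Thanks ${report.reporter}: this email looks ${verdict}.`;
}

console.log(handleReport({
  reporter: "alice",
  subject: "Your invoice",
  senderDomain: "example-bad.test",
  links: ["http://example-bad.test/pay"],
}));
```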


An autonomous, laser-guided mosquito eradication machine

A three-year-old startup called Bzigo is developing a device that accurately detects and locates mosquitoes. Once a mosquito is detected, the device sends a smartphone notification while the mosquito is marked by a laser pointer. ... An autonomous laser marker that keeps a bead on the bloodsuckers might just level the playing field. This might strike you as a silly idea, but the tech behind it is pretty intriguing. The device is composed of an infrared LED, a hi-res wide-angle camera, custom optics, and a processor. The innovation lies in several computer vision algorithms that can differentiate between a mosquito and other pixel-size signals (such as dust or sensor noise) by analyzing their movement patterns. A broad patent covering the device and its technologies was recently approved, giving Bzigo a leg up in the high-stakes world of mosquito sport hunting. It's also worth noting that Bzigo is hardly the first company to try to build a better mosquito solution using technology.
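
As a loose illustration of the movement-pattern idea (not Bzigo’s actual algorithm), the TypeScript sketch below classifies a pixel-sized track as mosquito-like or noise-like from its path length and how steadily it progresses; the thresholds are invented for illustration.

```typescript
// A track is a sequence of (x, y) centroids for one pixel-sized blob
// across video frames; the detector that produces tracks is not shown here.
type Point = { x: number; y: number };

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Very rough heuristic: a dust mote or sensor noise barely moves or jitters
// in place, while a flying mosquito covers distance fairly steadily.
function looksLikeMosquito(track: Point[]): boolean {
  if (track.length < 10) return false; // too short to judge
  let pathLength = 0;
  for (let i = 1; i < track.length; i++) {
    pathLength += distance(track[i - 1], track[i]);
  }
  const netDisplacement = distance(track[0], track[track.length - 1]);
  const steadiness = netDisplacement / pathLength; // 1 = straight line, ~0 = jitter
  return pathLength > 50 && steadiness > 0.3;      // invented thresholds
}

// Example: a roughly straight flight path vs. stationary jitter.
const flight = Array.from({ length: 20 }, (_, i) => ({ x: i * 5, y: i * 3 }));
const jitter = Array.from({ length: 20 }, () => ({ x: 100 + Math.random(), y: 100 + Math.random() }));
console.log(looksLikeMosquito(flight), looksLikeMosquito(jitter)); // true, false
```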


How to Build a Microservices Architecture With Node.js to Achieve Scale

Building real-world applications in JavaScript involves dynamic programming, and the size of a JavaScript application can grow uncontrollably. New features and updates get released, and bugs need to be fixed to maintain the code. To keep up, new developers need to be added to the project, which becomes complicated. The structure of modules and packages alone cannot downsize and simplify the application. To keep the application running smoothly, it is essential to convert the large, monolithic structure into small, independent pieces of programs. Such complexities can be resolved far more easily when JavaScript applications are built on microservices, more so with the Node.js ecosystem. In software application development, microservices are a style of service-oriented architecture (SOA) in which the app is structured as an assembly of interconnected services. With microservices, the application architecture is built with lightweight protocols. The services are fine-grained in the architecture. Microservices decompose the app into smaller services and enable improved modularity.
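
As a minimal sketch of the idea, here is a tiny “orders” service written in TypeScript for Node using only the built-in http module. The route, port and data are illustrative; a production microservice would add validation, logging, health checks and service discovery.

```typescript
import { createServer } from "http";

// A single, narrowly scoped service: it only knows about "orders".
// Other concerns (users, billing, ...) would live in their own services
// and be reached over lightweight protocols such as HTTP or messaging.
const orders = new Map<string, { item: string; quantity: number }>([
  ["1001", { item: "keyboard", quantity: 2 }],
]);

const server = createServer((req, res) => {
  const url = req.url ?? "";
  if (req.method === "GET" && url.startsWith("/orders/")) {
    const id = url.split("/")[2];
    const order = orders.get(id);
    res.writeHead(order ? 200 : 404, { "Content-Type": "application/json" });
    res.end(JSON.stringify(order ?? { error: "not found" }));
    return;
  }
  res.writeHead(404);
  res.end();
});

// Port 3001 is an arbitrary choice for this sketch.
server.listen(3001, () => console.log("orders service listening on :3001"));
```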


Passive optical LAN: Its day is dawning

The concept of using passive optical LANs in enterprise campuses has been around for years but hasn’t taken off because most businesses consider all-fiber networks to be overkill for their needs. I’ve followed this market for the better part of two decades, and I now believe we’re on the cusp of seeing POL go mainstream, starting in certain verticals. The primary driver of the change from copper to optical is that the demands on the network have evolved. Every company now considers its network to be business critical, whereas just a few years ago it was considered best-effort in nature. Downtime or a congested network once meant inconvenienced users; today it likely means the business is losing big money. ... The early adopters of POL are companies that are highly distributed with large campuses and need to get more network services in more places. This includes manufacturing organizations, universities, hospitality, cities and airports. Although I’ve highlighted a few verticals, the fact is that any business can take advantage of POL.


Decision Strategies for a Micro Frontends Architecture

To define a micro frontend, we must first identify it: is it a horizontal or vertical split? In a horizontal split, different teams work on different frontends within the same view, which means they have to coordinate on the composition of a page. In a vertical split, each team is responsible for a specific view over which it has full control. Mezzalira thinks that adhering to vertical slicing in general simplifies many of the decisions that follow. It minimizes the coordination between teams, and he believes it is closer to how a frontend developer is used to working. In an earlier blog post, he described how to identify micro frontends in more detail. When composing micro frontends, there are three options: client-side, edge-side and server-side composition. Client-side means implementing a technique like an application shell for loading single-page applications. In edge-side composition, Edge Side Includes (ESI) or a similar technique is used on or near the edge.
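
As a rough sketch of client-side composition with an application shell (the routes, element IDs and mounting strategy are invented for illustration), the shell might own the routing and delegate each view to the team that owns it, matching the vertical split described above:

```typescript
// Minimal client-side application shell for a browser.
type MountFn = (host: HTMLElement) => void;

// In a real setup each entry would be loaded from that team's own deployment,
// e.g. via dynamic import or a script tag; here they are inlined stubs.
const microFrontends: Record<string, MountFn> = {
  "/catalog": (host) => { host.innerHTML = "<h1>Catalog (team A)</h1>"; },
  "/checkout": (host) => { host.innerHTML = "<h1>Checkout (team B)</h1>"; },
};

function route(path: string): void {
  const host = document.getElementById("app");
  if (!host) return;
  host.innerHTML = ""; // unmount the previous view
  const mount = microFrontends[path] ?? ((h: HTMLElement) => { h.textContent = "Not found"; });
  mount(host);         // hand the whole view to the owning team's code
}

window.addEventListener("popstate", () => route(location.pathname));
route(location.pathname);
```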



Quote for the day:


"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

