Daily Tech Digest - May 31, 2019

How To Identify What Technologies To Invest In For Digital Transformation

There are many aspects of the experience, but if you look at the central pillars of a great experience, it comes down to the acronym “ACT.” The “A” pillar of ACT is anticipation: the platform must anticipate what the customer or employee needs when using the platform. The second pillar, “C,” is completeness: the platform should not put the burden of tasks on the customer or employee; it should run the activity to its completion and deliver a satisfying, complete result back to them. The third pillar, “T,” represents timeliness: the experience needs to be delivered in a time frame that is relevant and consistent with customer or employee expectations. An example is in sales, where the company may have 45 minutes (or perhaps two days) to meet the stakeholder’s need. Timeliness is not about raw response time; it’s about the amount of time the individual gives the company to get to a complete answer, which could be seconds, hours or days.




The digital twin is an evolving digital profile of the historical and current behavior of products, assets, or processes and can be used to optimize business performance. Based on cumulative, real-time, real-world data measurements across an array of dimensions, the digital twin depends on connectivity—and the IIoT—to drive functionality. Amid heightened competition, demand pressures, inaccurate capacity assumptions, and a suboptimal production mix, one manufacturing company sought ways to drive operational improvements, accelerate production throughput, and promote speed to market. At the same time, however, the manufacturer was hampered by limited visibility into its machine life cycles, and knew relatively little about resource allocation throughout the facility. To gain deeper insight into its processes—and to be able to simulate how shifts in resources or demand might affect the facility—the manufacturer used sensors to connect its finished goods and implement a digital twin.
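To make the idea concrete, here is a minimal sketch in Java of a digital twin for a single machine: a virtual record that is continuously updated from sensor readings and can answer a simple "what if" question. The metric names ("units_per_hour", "utilization") and the linear projection are hypothetical simplifications, not how any particular vendor implements a twin.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal digital-twin sketch: holds the latest sensor readings for one machine
// and offers a toy simulation query. Metric names and formula are illustrative only.
public class MachineTwin {
    private final String machineId;
    private final Map<String, Double> latestReadings = new ConcurrentHashMap<>();
    private volatile Instant lastUpdated = Instant.EPOCH;

    public MachineTwin(String machineId) {
        this.machineId = machineId;
    }

    public String machineId() {
        return machineId;
    }

    // Called whenever an IIoT sensor reports a new measurement, e.g. "spindle_rpm".
    public void ingest(String metric, double value, Instant timestamp) {
        latestReadings.put(metric, value);
        if (timestamp.isAfter(lastUpdated)) {
            lastUpdated = timestamp;
        }
    }

    // Toy simulation: estimate hourly throughput at a different utilization level,
    // scaling linearly from the rate currently observed on the real machine.
    public double projectedThroughput(double targetUtilization) {
        double observedRate = latestReadings.getOrDefault("units_per_hour", 0.0);
        double observedUtil = latestReadings.getOrDefault("utilization", 1.0);
        return observedUtil == 0.0 ? 0.0 : observedRate * (targetUtilization / observedUtil);
    }
}
```

A production twin would of course persist history, model many assets, and feed a proper simulation engine; the sketch only shows the ingest-then-query shape the excerpt describes.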



How iRobot used data science, cloud, and DevOps

The core item in the new design language is the circle in the middle of the robots. The circle represents the history of iRobot, which featured a bevy of round Roomba robots. "The circle is a nod back to the round robots and gives us the ability to be more expansive with geometries," Angle explains. But iRobot 2.0 also represents the maturation of iRobot. "Innovation at iRobot started back in the early days with a toolkit of robot technology. Innovation was really about market exploration and finding different ways for the toolkit to create value," Angle says. Through that lens, iRobot explored everything from robots for space exploration to toys to industrial cleaning and medical uses. "Our first 10 to 15 years of history is fraught with market exploration," Angle says. Ultimately, iRobot, founded in 1990, narrowed its focus to defense, commercial and consumer markets before focusing solely on home robots. iRobot divested its commercial and military robot divisions; the latter was ultimately acquired by FLIR for $385 million.


The Defining Role of Open Source Software for Managing Digital Data


Open source use is accelerating and driving some of the most exciting ventures of modern IT for data management. It is a catalyst for infusing innovation. Examples include Apache Hadoop, Apache Spark, and MongoDB in big data; Android in mobile; OpenStack and Docker in cloud; AngularJS, Node.js, Eclipse Che, and React, among others, in web development; Talend and Pimcore in data management; and TensorFlow in machine learning. Plus, the presence of Linux is now everywhere—in the cloud, the IoT, AI, machine learning, big data, and blockchain. This adoption of open source software, especially in data management, will only intensify in the coming years. Open source has a certain edge in that it does not restrain IT specialists and data engineers from innovating and making the use of data more pervasive. In my experience, successful data management depends on breaking down data silos in the enterprise, with a consolidated platform in place for rationalizing old data as well as deploying new data sources across the enterprise.


DevOps security best practices span code creation to compliance


Software security often starts with the codebase. Developers grapple with countless oversights and vulnerabilities, including buffer overflows; authorization bypasses, such as not requiring passwords for critical functions; overlooked hardware vulnerabilities, such as Spectre and Meltdown; and ignored network vulnerabilities, such as OS command or SQL injection. The emergence of APIs for software integration and extensibility opens the door to security vulnerabilities, such as lax authentication and data loss from unencrypted data sniffing. Developers' responsibilities increasingly include security awareness: They must use security best practices to write hardened code from the start and spot potential security weaknesses in others' code. Security is an important part of build testing within the DevOps workflow, so developers should deploy additional tools and services to analyze and evaluate the security posture of each new build.
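As a hedged illustration of the OS command injection risk the excerpt mentions, here is a minimal Java sketch. The method names and the ping example are hypothetical; the point is only the difference between splicing user input into a shell command line and passing it as a discrete argument with no shell involved.

```java
import java.io.IOException;
import java.util.List;

public class CommandInjectionSketch {
    // Vulnerable pattern: user input is spliced into a shell command line, so an
    // input like "example.com; rm -rf /tmp/foo" runs a second, attacker-chosen command.
    static Process pingUnsafe(String host) throws IOException {
        return Runtime.getRuntime().exec(new String[] {"sh", "-c", "ping -c 1 " + host});
    }

    // Safer pattern: the input is passed as a single argument and no shell parses
    // the command line, so metacharacters in the input are not interpreted.
    static Process pingSafer(String host) throws IOException {
        return new ProcessBuilder(List.of("ping", "-c", "1", host)).start();
    }
}
```

The same idea carries over to SQL injection, where parameterized statements play the role that the argument list plays here: untrusted data never becomes part of the command being parsed.

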
Chief artificial intelligence officer
The CAIO might not be at the Executive Committee level, but beware the various other departments reaching out to own the role. AI often gets its initial traction through innovation teams – but is then stymied in the transition to broader business ownership. The IT function has many of the requisite technological skills but often struggles to make broader business cases or to deliver on change management. The data team would be a good home for the CAIO, but only if it is operating at the ExCom level: a strong management information (MI) function is a world away from a full AI strategy. Key functions may be strong users of AI – digital marketing teams or customer service teams with chatbots, for example – but they will always be optimising on specific things. So, who will make a good CAIO? This is a hard role to fill: balancing data science and technology skills with broader business change management experience is a fine line to walk. Ultimately it will be circumstances that dictate where the balance should be struck. Factors include the broader team mix and the budget available, but above all the nature of the key questions that the business faces.


Researcher Describes Docker Vulnerability

Containers, which have grown in popularity with developers over the last several years, are a standardized way to package application code, configurations and dependencies into a single object, according to Amazon Web Services. The flaw that Sarai describes is part of Docker's FollowSymlinkInScope function, which is typically used to resolve file paths within containers. Instead, Sarai found that this particular symlink function is subject to a time-of-check-to-time-of-use, or TOCTOU, bug. ... But a bug can occur that allows an attacker to modify these resource paths after resolution but before the assigned program starts operating on the resource. This allows the attacker to change the path after the verification process, thus bypassing the security checks, security researchers say. "If attackers can modify a resource between when the program accesses it for its check and when it finally uses it, then they can do things like read or modify data, escalate privileges, or change program behavior," Kelly Shortridge, vice president of product strategy at Capsule8, a security company that focuses on containers, writes in a blog about this Docker vulnerability.
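To show the general class of bug rather than Docker's actual code, here is a minimal, hedged Java sketch of a TOCTOU race on a file path. The file names are hypothetical and the "attacker" is just a second thread; on a real system the window is a race rather than a deterministic swap, and symlink creation generally requires a Unix-like platform.

```java
import java.nio.file.*;

// Minimal sketch of a time-of-check-to-time-of-use (TOCTOU) race: the path is
// checked, then swapped for a symlink, then used, so the check no longer holds.
public class ToctouSketch {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("toctou");
        Path safe = Files.writeString(dir.resolve("safe.txt"), "harmless data");
        Path secret = Files.writeString(dir.resolve("secret.txt"), "sensitive data");
        Path target = dir.resolve("report.txt");
        Files.copy(safe, target);

        // "Attacker": replaces the checked file with a symlink to the secret.
        Thread attacker = new Thread(() -> {
            try {
                Files.delete(target);
                Files.createSymbolicLink(target, secret);
            } catch (Exception ignored) { }
        });

        // Time of check: at this instant the path is a plain regular file...
        if (Files.isRegularFile(target, LinkOption.NOFOLLOW_LINKS)) {
            attacker.start();
            attacker.join();                              // the swap happens in the gap
            // Time of use: the same path now resolves through the symlink.
            System.out.println(Files.readString(target)); // prints "sensitive data"
        }
        // Commonly suggested mitigations for this class of bug include performing
        // checks on an already-opened handle rather than on the path, or preventing
        // concurrent modification while paths are being resolved.
    }
}
```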


JDBC vs. ODBC: What's the difference between these APIs?

Many people associate ODBC with Microsoft because Microsoft integrates ODBC connectors right into its operating system. Furthermore, Microsoft has always promoted Microsoft Access as an ODBC-compliant database. In reality, the ODBC specification is based upon the Open Group's Call Level Interface specification and is supported by a variety of vendors. The JDBC specification is owned by Oracle and is part of the Java API. Evolution of the JDBC API, however, is driven by the open and collaborative JCP and its Java Specification Requests. So while Oracle oversees the API's development, progress is largely driven by the user community. Despite the separate development paths of ODBC and JDBC, both rely on RDBMS vendors supporting agreed-upon specifications. These standards are set by the data management and interchange committee of the International Organization for Standardization (ISO), and both JDBC and ODBC vendors work to maintain compliance with the latest ISO specification.
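For readers who have only seen the ODBC side, here is a minimal JDBC sketch. The connection URL, credentials and table are hypothetical, and running it assumes the matching JDBC driver (here a PostgreSQL one) is on the classpath; the shape of the code is the same regardless of which vendor's driver is used.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL; the exact format depends on the driver in use.
        String url = "jdbc:postgresql://localhost:5432/inventory";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT name, quantity FROM parts WHERE quantity < ?")) {
            stmt.setInt(1, 10);  // bind the parameter rather than concatenating it
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + ": " + rs.getInt("quantity"));
                }
            }
        }
    }
}
```

An ODBC client written in C would go through the driver manager's SQL* call-level functions instead, but the flow is analogous: connect, prepare, bind, execute, iterate over results.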


LinkedIn Talent Solutions: 10 tips for hiring your perfect match

Best practices for hiring and recruiting on LinkedIn
The product uses AI to recommend relevant candidates who could be a good fit for an available role, and it leverages analytics to make recommendations in real time as you’re crafting your job description. LinkedIn Recruiter and Jobs also allows companies to target open roles using LinkedIn Ads to reach relevant candidates. In the new Recruiter and Jobs, talent professionals no longer have to jump back and forth between Recruiter and Jobs; the update puts search leads and job applicants for an open role within the same project, viewable on a single dashboard. Candidates can then be saved to your Pipeline, where they’ll move through the later stages of the hiring process. ... Finally, LinkedIn Pages allows organizations of any size to showcase their unique culture and employee experience by posting employee-created content, videos and photos. Candidates can visit an organization’s page to see what the organization has to offer, as well as get personalized job recommendations and connect with employees like them, according to LinkedIn. Real-time page analytics can identify who’s engaging with your organization’s page and which content is making the greatest impact.


Sidecar Design Pattern in Your Microservices Ecosystem

Segregating some of an application's functionality into a separate process can be viewed as the Sidecar pattern. The sidecar design pattern allows you to add a number of capabilities to your application without writing additional configuration code for third-party components. Just as a sidecar is attached to a motorcycle, in software architecture a sidecar is attached to a parent application and extends or enhances its functionality. A sidecar is loosely coupled with the main application. Let me explain this with an example. Imagine that you have six microservices talking with each other in order to determine the cost of a package. Each microservice needs functionality like observability, monitoring, logging, configuration, circuit breakers, and more. All of these capabilities are implemented inside each microservice using industry-standard third-party libraries. But is this not redundant? Does it not increase the overall complexity of your application? A sketch of how a sidecar pulls such concerns out of the service follows below.
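Here is a minimal, hedged sketch of the idea in plain Java: a sidecar process that sits in front of a parent service and handles request logging and timing so the service itself does not have to. The ports are hypothetical (the parent service is assumed to listen on localhost:8080, the sidecar on 9090), headers are not forwarded, and real deployments usually use a dedicated proxy (for example, a service-mesh sidecar) rather than hand-rolled code; the sketch only shows the shape of the pattern.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sidecar sketch: a separate process that forwards traffic to the parent
// application and handles a cross-cutting concern (logging/timing) on the way.
public class LoggingSidecar {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);

        server.createContext("/", exchange -> {
            long start = System.nanoTime();
            String method = exchange.getRequestMethod();
            URI uri = exchange.getRequestURI();
            try {
                // Forward the incoming request, unchanged, to the parent application.
                byte[] requestBody = exchange.getRequestBody().readAllBytes();
                HttpRequest.BodyPublisher body = requestBody.length == 0
                        ? HttpRequest.BodyPublishers.noBody()
                        : HttpRequest.BodyPublishers.ofByteArray(requestBody);
                HttpRequest forwarded = HttpRequest.newBuilder(
                                URI.create("http://localhost:8080" + uri))
                        .method(method, body)
                        .build();
                HttpResponse<byte[]> response =
                        client.send(forwarded, HttpResponse.BodyHandlers.ofByteArray());

                byte[] payload = response.body();
                exchange.sendResponseHeaders(response.statusCode(),
                        payload.length == 0 ? -1 : payload.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(payload);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(502, -1);
            } finally {
                exchange.close();
                // The cross-cutting concern lives here, outside the service's own code.
                System.out.printf("%s %s took %d ms%n", method, uri,
                        (System.nanoTime() - start) / 1_000_000);
            }
        });
        server.start();
    }
}
```

In a container platform such as Kubernetes, the equivalent arrangement is a second container in the same pod, so each of the six microservices keeps its own code small while the shared concerns ride alongside it.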



Quote for the day:


"The essential question is not, "How busy are you?" but "What are you busy at?" -- Oprah Winfrey

