Daily Tech Digest - April 06, 2019

Artificial intelligence, machine learning and intelligence

Besides processing information in the “classic” way, quantum computers exploit two specific characteristics of quantum systems: superposition, where two or more quantum states can be added together, and entanglement, which counter-intuitively implies the presence of many remote correlations among the physical quantum states examined. The result is data availability and calculation speed that enable previously unimaginable operations: the analysis of continental climate change; the world economic cycles of raw materials; the number and physical constants of galaxies in space. In the future, there will also be convergence between AI and the Internet of Things, which will make both the construction of vehicles and their driving autonomous. Another short-term integration will be between blockchain technology and artificial intelligence. We have often spoken about blockchain, but in this case what matters above all is the integration between the “closed” blockchain network and selective data collection or, alternatively, a patented and still-secret technology.
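As a minimal illustrative sketch (not from the article), superposition and entanglement can be seen in a two-qubit Bell state, represented here as four complex amplitudes in the basis |00⟩, |01⟩, |10⟩, |11⟩:

```python
import math

def bell_state():
    """Equal superposition of |00> and |11>: (|00> + |11>) / sqrt(2)."""
    a = 1 / math.sqrt(2)
    return [a, 0.0, 0.0, a]  # amplitudes for |00>, |01>, |10>, |11>

def outcome_probabilities(amplitudes):
    """Born rule: each basis outcome occurs with probability |amplitude|^2."""
    return [abs(a) ** 2 for a in amplitudes]

probs = outcome_probabilities(bell_state())
# Entanglement appears as perfect correlation: only |00> and |11> can occur,
# so measuring one qubit instantly fixes the outcome of the other.
print(probs)
```

Here the remote correlation the excerpt mentions is visible in the probabilities: the two qubits are never observed in disagreement.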

Continuous Delivery Foundation seeks smoother CI/CD paths

One reason is that enterprises must pick from a large menu of often fragmented tools in the CI/CD market, and then integrate the various tools into their CI/CD pipelines. Among the many tools in the CI/CD landscape are Shippable, CloudBees Jenkins, Atlassian's Bamboo, Bitnami, CircleCI, Travis CI, JetBrains' TeamCity and Microsoft's Azure DevOps Server. Nearly every company also creates software to automate its business processes, so CI/CD tools are in higher demand than ever. Despite some consolidation in the DevOps arena -- JFrog recently acquired Shippable, and CloudBees snapped up Codeship -- enterprises do face a choice: They must integrate several different tools to build their pipelines, or lock into an end-to-end DevOps tools environment with one of the major cloud providers. To help simplify the process, the Linux Foundation formed the Continuous Delivery Foundation (CDF) in mid-March. Among the CDF's founding members, which span open source software, platforms and tools, are the following: Alibaba, Autodesk, Capital One, CircleCI, CloudBees, GitLab, Google, Huawei, IBM, JFrog, Netflix, Puppet, Red Hat and SAP.

The Best Decision: Your Future and Serverless Stream Processing

A streaming data processing architecture usually comprises two layers: a storage layer and a processing layer. The former is responsible for ordering large streams of records and facilitating persistence and accessibility at high speeds. The processing layer takes care of consuming data, executing computations, and notifying the storage layer to discard already-processed records. Data is processed either incrementally, record by record, or by matching over sliding time windows. The processed data is then subjected to streaming analytics operations, and the derived information is used to make context-based decisions. For instance, companies can track changes in public sentiment about their products by continuously analyzing social media streams; the world's most influential nations can intervene in decisive events such as presidential elections in other powerful countries; and mobile apps can offer personalized product recommendations based on device geo-location and user emotions.
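A minimal sketch of the processing layer described above (class and field names are hypothetical): records are aggregated incrementally over a sliding time window, and records that fall out of the window are discarded, mirroring the storage-layer eviction the excerpt mentions:

```python
from collections import deque

class SlidingWindowProcessor:
    """Toy processing layer: keeps records inside a sliding time window
    and maintains an incremental aggregate over them."""

    def __init__(self, window_seconds):
        self.window_seconds = window_seconds
        self.window = deque()  # (timestamp, value) pairs, oldest first
        self.total = 0.0

    def consume(self, timestamp, value):
        # Incremental update: fold the new record into the running total...
        self.window.append((timestamp, value))
        self.total += value
        # ...then evict records older than the window, signalling that the
        # storage layer (here just the deque) can drop them.
        while self.window and self.window[0][0] < timestamp - self.window_seconds:
            _, old_value = self.window.popleft()
            self.total -= old_value

    def average(self):
        return self.total / len(self.window) if self.window else 0.0

proc = SlidingWindowProcessor(window_seconds=60)
for ts, value in [(0, 10.0), (30, 20.0), (90, 30.0)]:
    proc.consume(ts, value)
# At t=90 the record from t=0 has expired; only t=30 and t=90 remain.
print(proc.average())
```

A real deployment would use a managed stream store (Kafka, Kinesis, and the like) rather than an in-memory deque, but the incremental-update and window-eviction logic is the same shape.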

10 Interesting Facts About Chatbots

People don’t really care if your chatbot has a great personality, especially if it can’t solve an issue that one of your customers is currently experiencing. Make sure you focus on utility over personality. Forty-eight percent of respondents in the same LivePerson survey said they prefer chatbots that can solve problems. However, don’t forget about speed! Consumers value friendliness and ease of use the most in chatbots, but speed is a close third, according to Aspect’s research. Speed matters more to consumers than a successful interaction, and even more than accuracy. ... Facebook has evolved a ton over the years. It’s no longer just a place to keep in touch with friends from school and spy on your ex. Now it’s also a place to buy things. In fact, a new model is taking shape — one where people don’t have to click on a link, leave Facebook to visit a traditional company website, add stuff to a shopping cart, and complete a purchase. That’s because 37 percent of people are open to the idea of buying items on the social network, according to the same HubSpot research. And you can be sure that number will continue to grow as more people are exposed to and adopt chatbots.

5G & Industry 4.0 at Hannover Messe 2019

[Image: Woman wearing AR lenses interacting with a robotic arm]
Ericsson sees mobile technology as a new foundation to accelerate and support these new technologies. If factories are to enable digital twins of all processes and workflows, reliable wireless connectivity and low latency are a necessity. With 5G, digital twins can be accessed through remote VR monitoring, supporting transparency in factories. For example, one of our interactive proof points is a virtual tour of the FCA Mirafiori plant in Torino, where the visitor can “move” within it, monitoring key processes for bottlenecks and machinery parameters such as vibration and temperature. We also address the challenges of companies with distributed production sites. A common problem is that similar processes perform differently at different locations. To solve this, plants must break down silos and introduce transparency to optimize and align these processes. With Fraunhofer IPT, Ericsson presents the 5G Production Cockpit, which transmits live data and creates digital twins to give a real-time view of processes in both Aachen and Stockholm. With centralized data and analytics, current and historical data are compared for deeper insights.

Form a hybrid integration plan for your architecture

The first challenge is to figure out exactly what your requirements are as an architect. You can have a narrow perspective and focus on hybrid integration in the context of a particular project or initiative, or you can have a holistic perspective. And if you have a holistic perspective, it's hard work to figure out exactly what your integration requirements are today and what they will be in, let's say, the next three to five years, because of everything that is happening. The second is selecting the appropriate combination of technologies. Architects would love to have one single [hybrid integration] platform that covers everything: connecting IoT devices, mobile devices, APIs, cloud, etc. This is difficult. In the market, there are many [hybrid integration] products, but few are good at supporting all these different scenarios. So, identify the right combination of technologies that can solve the problem. ... Sometimes, you cannot put the same platform in the three environments. Maybe on-premises you have more demanding requirements than in the cloud.

Domain-Oriented Observability

"Observability" has a broad scope, from low-level technical metrics through to high-level business key performance indicators (KPIs). On the technical end of the spectrum, we can track things like memory and CPU utilization, network and disk I/O, thread counts, and garbage collection (GC) pauses. On the other end of the spectrum, our business/domain metrics might track things like cart abandonment rate, session duration, or payment failure rate. Because these higher-level metrics are specific to each system, they usually require hand-rolled instrumentation logic. This is in contrast to lower-level technical instrumentation, which is more generic and often achieved without much modification to a system's codebase beyond perhaps injecting some sort of monitoring agent at boot time. It's also important to note that higher-level, product-oriented metrics are more valuable because, by definition, they more closely reflect how well the system is performing against its intended business goals. By adding instrumentation that tracks these valuable metrics, we achieve Domain-Oriented Observability.
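A minimal sketch of the hand-rolled instrumentation the excerpt describes (class and metric names are hypothetical): a dedicated "domain probe" translates domain events into metric calls, so business code reads in domain terms while the metric plumbing stays in one place:

```python
class InMemoryMetrics:
    """Stand-in for a real metrics client (StatsD, Prometheus, ...)."""
    def __init__(self):
        self.counters = {}
        self.gauges = {}
    def increment(self, name):
        self.counters[name] = self.counters.get(name, 0) + 1
    def gauge(self, name, value):
        self.gauges[name] = value

class ShoppingProbe:
    """Domain probe: turns domain events into metrics, keeping
    instrumentation details out of the business logic."""
    def __init__(self, metrics):
        self.metrics = metrics
    def cart_abandoned(self, cart_total):
        self.metrics.increment("carts.abandoned")
        self.metrics.gauge("carts.abandoned.last_value", cart_total)
    def payment_failed(self, reason):
        self.metrics.increment(f"payments.failed.{reason}")

metrics = InMemoryMetrics()
probe = ShoppingProbe(metrics)

# Business code calls the probe in domain language; the probe decides
# which counters and gauges those events map onto.
probe.cart_abandoned(cart_total=42.50)
probe.payment_failed("card_declined")
print(metrics.counters, metrics.gauges)
```

The payoff is that the checkout code never mentions counter names or metric backends; swapping the in-memory client for a real one changes only the probe's collaborator.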

Why Cybersecurity Matters: A Lawyer’s Toolkit

The stark reality is that most attorneys are highly independent and singularly focused on servicing their clients, whether as in-house or outside counsel. The extra steps required to access files and applications with often hard-to-remember (but more secure) passwords are not always congruous with billable hours and around-the-clock attention to deliverables. Lawyers may compromise on security to ensure direct communications with clients on their platform of choice, in pursuit of the almighty billable hour. Another vulnerability is that attorneys crave information: the more, the better. This trait is something that savvy hackers understand and will use to their advantage. Email phishing, in that regard, is a frequent tactic. The smart cyber-villain will quickly learn how to dupe attorneys and their assistants by sending attachments and links by email that appear to come from a legitimate source. Once said attachment is opened — bingo! — the malware starts to execute and do the dirty work behind the scenes, scouring the device for desired data points and eventually securing access to an internal network.

Does IT need DevOps Managers?

If you read Agile literature, you’ll realize that the reason Agile in general, and Scrum in particular, promotes the role of a Scrum “Master” rather than a Manager is that the latter often oversteps their authority and mandate. The effect on the experts is disastrous. They feel ‘controlled’, with no motivation left to appreciate the overall objective of the venture they’re part of, and confine themselves to doing just what they’re asked to do. This is the beginning of the ‘silo’ mentality — the very mentality DevOps is supposed to eliminate. If you look deeper to understand the silo between Dev and Ops, you’ll observe that the silo gets deeper as you go lower in the hierarchy — it’s not so deep at the Dev and Ops management layer. So, the point is that IT management needs to look at itself in the mirror and assess the degree to which it has contributed to this chasm between Dev and Ops. It needs to step back from a command-and-control approach to a far more ‘watch from a distance and protect’ approach. It needs to empower the ‘experts’: allow them to mingle, interact and collaborate; show them the big picture; assert confidence in them to solve the big problems; and create a win-win platform.

When should I choose between serverless and microservices?

Microservices are best suited for long-running, complex applications that have significant resource and management requirements. You can migrate an existing monolithic application to microservices, which makes it easier to modularly develop features for the application and deploy it in the cloud. Microservices are also a good choice for building e-commerce sites, as they can retain information throughout a transaction and meet the needs of a 24/7 customer base. On the other hand, serverless functions only execute when needed. Once the execution is over, the computing instance that runs the code decommissions itself. Serverless aligns with applications that are event driven, especially when the events are sporadic and the event processing is not resource-intensive. Serverless is a good choice when developers need to deploy fast and there are minimal application scaling concerns. ... As a rule of thumb, choose serverless computing when you need automatic scaling and lower runtime costs, and choose microservices when you need flexibility and want to migrate a legacy application to a modern architecture.
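To make the contrast concrete, a purely illustrative sketch in plain Python (not tied to any provider or framework): a serverless function is a stateless handler invoked once per event, while a microservice is a long-running process that retains state, such as a shopping cart, across requests:

```python
def handle_order_event(event):
    """Serverless style: a stateless function invoked per event; the
    platform provisions compute for the call and tears it down after."""
    return {"status": "processed", "order_id": event["order_id"]}

class CartService:
    """Microservice style: a long-running process that keeps state
    across requests, e.g. a shopping cart during a transaction."""
    def __init__(self):
        self.carts = {}  # state survives between calls
    def add_item(self, user, item):
        self.carts.setdefault(user, []).append(item)
    def checkout(self, user):
        items = self.carts.pop(user, [])
        return {"user": user, "items": items}

# One-shot, event-driven invocation: no state outlives the call.
result = handle_order_event({"order_id": 7})

# Stateful, session-spanning service: the cart persists between requests.
svc = CartService()
svc.add_item("alice", "book")
svc.add_item("alice", "pen")
order = svc.checkout("alice")
print(result, order)
```

The rule of thumb in the excerpt maps directly onto this shape: if your workload looks like `handle_order_event` (sporadic, stateless, event-driven), serverless fits; if it looks like `CartService` (long-running, stateful, always-on), a microservice fits.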

Quote for the day:

"Learn from the mistakes of others. You can never live long enough to make them all yourself." -- Groucho Marx
