Daily Tech Digest - April 22, 2019

Fujitsu completes design of exascale supercomputer, promises to productize it

The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu and designed for exascale systems. The chip is based on the Armv8 architecture, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip. A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims A64FX will offer peak double-precision (64-bit) floating-point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack. That comes out to over one petaflop per rack. Contrast that with Summit, the top supercomputer in the world, built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute of 864 teraflops.
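The per-rack figure follows directly from the per-chip peak; a quick back-of-the-envelope check in Python, using only the numbers quoted in the article:

```python
# Figures as quoted in the article; these are peak numbers, not sustained.
PEAK_PER_CHIP_TFLOPS = 2.7   # A64FX peak double-precision per chip
CPUS_PER_NODE = 1
NODES_PER_RACK = 384

rack_tflops = PEAK_PER_CHIP_TFLOPS * CPUS_PER_NODE * NODES_PER_RACK
print(f"Post-K rack peak: {rack_tflops:.1f} TF (~{rack_tflops / 1000:.2f} PF)")
# prints: Post-K rack peak: 1036.8 TF (~1.04 PF)
print("Summit rack peak (quoted): 864 TF")
```

So a single Post-K rack slightly exceeds one petaflop of peak double-precision throughput, about 20% above the quoted Summit rack figure.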



As attacks get worse and more commonplace, the article notes, companies need cybersecurity professionals more and more. But because of a perfect storm of scarce skills and high demand, security jobs come with a high salary, meaning that businesses not only struggle to find the right people, they have to pay top dollar to get them. All of that means that cyber-criminals are having a field day, as the article illustrates. Attackers take advantage of ill-prepared companies, knowing that they are likely to be successful. It’s clear that the industry does need to improve, for the sake of customers and businesses alike. And to do that, we need good people with the right skills. The industry has known for a while that those people are not easy to come by – there are simply not enough of them. There are a lot of reasons for that shortage, and it’s worth bearing in mind that it’s not the easiest industry to work in; the stress of the work means that mental health issues are rife.


Node.js vs. PHP: An epic battle for developer mindshare

Suddenly, there was no need to use PHP to build the next generation of server stacks. One language was all it took to build Node.js and the frameworks running on the client. “JavaScript everywhere” became the mantra for some. Since that discovery, JavaScript has exploded. Node.js developers can now choose between an ever-expanding collection of excellent frameworks and scaffolding: React, Vue, Express, Angular, Meteor, and more. The list is long and the biggest problem is choosing between excellent options. Some look at the boom in Node.js as proof that JavaScript is decisively winning, and there is plenty of raw data to bolster that view. GitHub reports that JavaScript is the most popular language in its collection of repositories, and JavaScript’s kissing cousin, TypeScript, is rapidly growing too. Many of the coolest projects are written in JavaScript and many of the most popular hashtags refer to it. PHP, in the meantime, has slipped from third place to fourth in this ranking and it’s probably slipped even more in the count of press releases, product rollouts, and other heavily marketed moments.


Network analytics tools take monitoring to the next level

These tools help to identify problems as well as assist with capacity planning. Common tools include Simple Network Management Protocol (SNMP), syslog and Cisco NetFlow. While these tools provide some great information, they're siloed systems that work independently from one another. So, to perform any deep investigative work needed to determine the root cause of a particularly tricky network performance issue, IT staff would waste hours bouncing between tools. Modern network analytics tools provide a remedy to this time-consuming and complicated process. Network analytics software draws on traditional monitoring protocols and methods and then adds more sophisticated data flow collection methods. All collected data is then analyzed in real time using AI. By combining all data sources, the analytics platform can comb through far more information than ever before in order to make accurate network performance conclusions.


A Data Quality Framework for Big Data

Data profiling is a good first step in judging data quality. But it is different for big data than for structured data. Structured methods of column, table, and cross-table profiling can’t easily be applied to big data. Data virtualization tools can create row/column views for some types of big data, where the views can then be profiled using relational techniques. This approach provides useful data content statistics but fails to give a full picture of the shape of the data. Visual profiling shows patterns, exceptions, and anomalies that are helpful in judging big data quality. Most “unstructured” data does have structure, but it is different from relational structure. Visual profiling will help to show the structure of document stores and graph databases, for example. Data samples can then be checked against the inferred structure to find exceptions—perhaps iteratively refining understanding of the underlying structure. Data quality judgment and structural findings should be recorded in a data catalog allowing data consumers to evaluate the usability of the data. With big data, quality must be evaluated as fit for purpose. With analytics, the need for data quality can vary widely by use case. The quality of data used for revenue forecasting, for example, may demand a higher level of accuracy than data used for market segmentation.
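The iterative structure-inference step described above can be sketched in a few lines: infer a “common” schema from a sample of documents, then flag records that deviate from it. The field names, sample data, and threshold below are purely illustrative.

```python
from collections import Counter

def infer_structure(docs, threshold=0.75):
    """Infer the 'common' schema of semi-structured documents:
    the fields present in at least `threshold` of the sample."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.keys())
    n = len(docs)
    return {field for field, c in counts.items() if c / n >= threshold}

def find_exceptions(docs, schema):
    """Return documents that deviate from the inferred structure."""
    return [d for d in docs if not schema.issubset(d.keys())]

# Hypothetical sample drawn from a document store
sample = [
    {"id": 1, "name": "Ada", "email": "ada@example.com"},
    {"id": 2, "name": "Grace", "email": "grace@example.com"},
    {"id": 3, "name": "Alan"},                      # missing 'email'
    {"id": 4, "name": "Edsger", "email": "e@example.com"},
]

schema = infer_structure(sample)
exceptions = find_exceptions(sample, schema)
print("inferred fields:", sorted(schema))   # ['email', 'id', 'name']
print("exceptions:", exceptions)            # the document missing 'email'
```

In practice each pass refines the inferred structure, and both the structure and the exception rate would be recorded in the data catalog for consumers to evaluate.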


Google Expands ML Kit, Adds Smart Reply and Language Identification

In a recent Android blog post, Google announced the release of two new Natural Language Processing (NLP) APIs for ML Kit, a mobile SDK that brings Google machine learning capabilities to iOS and Android devices: Language Identification and Smart Reply. In both cases, Google is providing domain-independent APIs that help developers analyze and generate text, speech, and other types of natural language. Both APIs are available in the latest version of the ML Kit SDK on iOS (9.0 and higher) and Android (4.1 and higher). ... Smart Reply allows contextually aware message response suggestions to be returned within a chat-based application. Using this feature allows for a quick and accurate response in a chat session. Gmail users have been using the Smart Reply feature for a couple of years now on the mobile and desktop versions of the service. Now developers can include Smart Reply capabilities within their own applications.


Can Blockchain Replace EDI In The Supply Chain?
“Blockchain in B2B integration brings more agility. Today, B2B integration requires that both parties know each other, at least on a technical level, to provide ways to solve issues such as nonrepudiation and acknowledgement,” writes Forrester Research principal analyst Henry Peyret in “The Future of B2B Integration.” “Forrester expects that, in the next three to five years, blockchain technologies could be used to provide additional agility in building dynamic ecosystems.” Although EDI has built a 20-year track record of reliability, the venerable technology’s main weak point is its cost. “If there’s going to be a rationale for replacement, it might just be that blockchain is cheaper,” Fearnley says. But not everyone says the transition from EDI to blockchain is a done deal. “There have been many contenders to overthrow EDI over the years, and none of them have succeeded because EDI does what it does pretty well,” says Simon Ellis, program vice president of supply chain strategies at IDC. He adds, however, “If you can make things more secure and faster, everyone will benefit.”




Despite that, Oracle stopped providing security updates to Java 8 in January 2019, in an attempt to push organizations into paid licensing agreements. Naturally, running out-of-date, insecure versions of Java is an exceptionally bad idea, presenting a conundrum to IT managers responsible for the deployment of Java applications: Either pay to maintain support for something that was once free to use, or—if even possible—attempt to move an application off of Java entirely. There is a viable third option, however: using a non-Oracle distribution of Java. Because Java is still fundamentally open source, any organization that wishes to ship its own patched version of OpenJDK can do so. Red Hat—which contributes to Java upstream, and ships a number of its own products built on Java—is doing just that. Red Hat is taking up the mantle of OpenJDK maintainer for versions 8 and 11, which will be supported until June 2023 and October 2024, respectively. New features are not expected for either version, as both are essentially in maintenance mode.




Data center workers happy with their jobs -- despite the heavy demands
Overall satisfaction is pretty good, with 72% of respondents generally agreeing with the statement “I love my current job,” while a third strongly agreed. And 75% agreed with the statement, “If my child, niece or nephew asked, I’d recommend getting into IT.” And there is a feeling of significance among data center workers, with 88% saying they feel they are very important to the success of their employer. That’s despite some challenges, not the least of which is a skills and certification shortage. Survey respondents cite a lack of skills as the biggest area of concern. Only 56% felt they had the training necessary to do their job, and 74% said they had been in the IT industry for more than a decade. The industry offers certification programs (every major IT hardware provider has them), but 61% said they have not completed or renewed certificates in the past 12 months. There are several reasons why. A third (34%) said it was due to a lack of a training budget at their organization, while 24% cited a lack of time, 16% said management doesn’t see a need for training, and 16% cited no training plans within their workplace.




Closing the cyber security gender gap reflects the realities of the larger global cyber environment, where there is diversity of gender, politics, social background, economics, and culture. The bad guys are not only diverse in their thinking and actions; so are potential foreign security partners. As such, different perspectives and experiences are a necessary complement to an industry that often hits an obstacle when it comes to language and terminology. More importantly, greater inclusion in cyber security starts to tear down antiquated perceptions that the profession is geared toward males. This is almost ironic considering that women have played prominent roles in computing, including programming, designing the computer systems that ran the U.S. census, and writing the software that supported the Apollo 11 mission. Addressing the cultural perception of the cyber security industry is necessary in order to continue to balance employment levels. Part of this requires a review to ensure that compensation levels are equal. According to a 2017 global information security study, women earned less than their male counterparts at every level.





Quote for the day:


"Surprise yourself every day with your own courage." -- Denholm Elliott


Daily Tech Digest - April 21, 2019

Blockchain: The Ultimate Disruptor?


What the internet did for the exchange of information, blockchain has the potential to do for the exchange of a digital asset’s value. Right now, many people in the blockchain space are talking about “tokenization,” which breaks down the ownership of an asset into digital tokens to allow wider-scale ownership of that asset. Tokenization started with initial coin offerings (ICOs) and has evolved to securitized token offerings (STOs), which have the potential to unlock the value of trillions of dollars of assets that are currently closed to the average person and make them more accessible. We’re talking about real-estate holdings, private equity, etc. When these assets are tokenized and brought into the market, it could impact the average person and how they do their financial and retirement planning, as well as where and what they choose to invest in. ... Rafia says about the wider access potential, “Currently, private equity, venture capital and other similar investments are not available to retail investors because there are a lot of regulations preventing it, as they tend to be riskier asset classes


The key to changing your enterprise is analyzing the impact of changes and planning those changes in a smart way. We do not advocate a ‘big up-front design’ approach, with huge, rigid multi-year transformation plans. Rather, in an increasingly volatile business world you need an iterative approach where your plans are updated regularly to match changing circumstances, typically in an agile manner. The figure below shows a simple example of dependencies between a series of changes, depicted with the pink boxes. A delay in ‘P428’ causes problems in the schedule, since ‘P472’ depends on it. Moreover, since the two changes overlap in scope (shown in the right-hand table), they could potentially be in each other’s way when they also overlap in time. This information is calculated from the combination of project schedule and architecture information, a clear example of the value of integrating this kind of structure and data in a Digital Twin.
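Once schedule and scope data are combined, the overlap check described above is straightforward to compute. The sketch below uses hypothetical dates and scope sets for the ‘P428’/‘P472’ example; the real calculation would draw on the architecture repository rather than a hard-coded dictionary.

```python
from datetime import date

# Hypothetical change portfolio; names echo the article's example.
changes = {
    "P428": {"start": date(2019, 1, 1), "end": date(2019, 6, 30),
             "scope": {"CRM", "Billing"}, "depends_on": []},
    "P472": {"start": date(2019, 5, 1), "end": date(2019, 9, 30),
             "scope": {"Billing", "Portal"}, "depends_on": ["P428"]},
}

def overlap_in_time(a, b):
    return a["start"] <= b["end"] and b["start"] <= a["end"]

def conflicts(changes):
    """Pairs of changes that overlap both in time and in scope."""
    names = sorted(changes)
    out = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            a, b = changes[x], changes[y]
            shared = a["scope"] & b["scope"]
            if shared and overlap_in_time(a, b):
                out.append((x, y, shared))
    return out

print(conflicts(changes))
# P428 and P472 overlap May-June and share the 'Billing' scope
```

The `depends_on` field is where a delay in ‘P428’ would be propagated to ‘P472’ before re-running the conflict check.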


How People Are the Real Key to Digital Transformation

An interview with Gerald C. Kane, Anh Nguyen Phillips, Jonathan R. Copulsky, and Garth R. Andrus, the authors of "The Technology Fallacy."
Digital disruption affects all levels of the organization. Our research shows, however, that higher-level leaders are generally much more optimistic about how their organization is adapting to that disruption than lower-level employees. This result suggests that leaders may be overestimating how well their organization is responding. In the book, we provide a framework by which leaders can survey their employees to gauge how digitally mature their organization is against 23 traits, which we refer to as the organization’s digital DNA. Digital maturity is usually unevenly distributed throughout an organization, and we encourage organizations to use this framework to assess how it is distributed so they can begin to identify and address the areas of improvement that are most likely to yield organizational benefits. ... a single set of organizational characteristics was essential for digital maturity -- accepting the risk of failure as a natural part of experimenting with new initiatives, actively implementing initiatives to increase agility, valuing and encouraging experimentation and testing as a means of continuous organizational learning, recognizing and rewarding collaboration across teams and divisions, increasingly organizing around cross-functional project teams, and empowering those teams to act autonomously.


Cachalot DB as a Distributed Cache with Unique Features

The most frequent use case for a distributed cache is to store objects identified by one or more unique keys. A database contains the persistent data and, when an object is accessed, we first try to get it from the cache and, if not available, load it from the database. Usually, if the object is loaded from the database, it is also stored in the cache for later use. ... By using this simple algorithm, the cache is progressively filled with data and its “hit ratio” improves over time. This cache usage is usually associated with an “eviction policy” to avoid excessive memory consumption. When a threshold is reached (either in terms of memory usage or object count), some of the objects from the cache are removed. The most frequently used eviction policy is “Least Recently Used,” abbreviated LRU. In this case, every time an object is accessed in the cache, its associated timestamp is updated. When eviction is triggered, we remove the objects with the oldest timestamps. Using Cachalot as a distributed cache of this type is very easy.
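The cache-aside read path and LRU eviction described above can be sketched in a few lines. This is a minimal in-process illustration, not the actual Cachalot API:

```python
from collections import OrderedDict

class LruCache:
    """Minimal LRU cache: accessing an entry refreshes its position;
    inserting past capacity evicts the least recently used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

def load_with_cache(key, cache, load_from_db):
    """Cache-aside read: try the cache first, fall back to the
    database, then populate the cache for later reads."""
    value = cache.get(key)
    if value is None:
        value = load_from_db(key)           # cache miss: hit the database
        cache.put(key, value)
    return value
```

A real distributed cache adds network transport, serialization, and coordinated eviction across nodes, but the read path has exactly this shape: the hit ratio improves as `load_with_cache` progressively fills the cache.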


Enterprise Architecture: A Blueprint for Digital Transformation


Enterprise architects have a tough job. They have to think strategically but act tactically. A successful enterprise architect can sit down at the boardroom table and discuss where the business needs to go, then translate “business speak” into technical capabilities on the back end. The key to EA is to always focus on business needs first, then how those needs can be met by applying technology. It comes down to the concept of IQ (intelligence quotient or “raw intelligence”) and EQ (emotional quotient, or “emotional intelligence”). As a recent Forbes article stated, “when it comes to success in business, EQ eats IQ for breakfast.” Good enterprise architects need to have good IQ and EQ. This balance prevents pursuing the latest technology just because it’s cool but instead determining what’s the best way to meet the business need. At the end of the day, an EA should be measured by the business outcomes it’s delivering. Our approach to EA (see below) starts with a business outcome statement, and ends with governance processes to verify we’re achieving those business outcomes and adhering to the EA blueprint.


Crisis Resilience for the Board


Similar to culture oversight, boards are increasingly monitoring company technology activities, from cyber risk to disruption risk to digital transformation. Directors are asking management tough questions about technologies that are vital to the business and whether they are truly protected from the most likely and impactful risks. Beyond protecting data, the board should understand whether management is incorporating resilience into their information technology and cybersecurity strategies. To do so, directors may seek to understand how the most critical data—or that which is most vital to the business’s success—is backed up and protected, both physically and logically. Directors should understand, at a high level, what the most critical data asset sets or capabilities are to the company and the risks posed to them. Additionally, directors should ask management whether it is considering innovative technologies to both protect assets and enable quick recovery in the event of potential loss. ... Directors might also endeavor to learn about leading practices around risk management, crisis management, cyber risk, physical security, succession planning, and culture risk. This could provide a level of comfort with the risks posed to the company, as well as a degree of confidence in the company’s ability to respond.


The Cybersecurity 202: This is the biggest problem with cybersecurity research


“There are a whole lot of possible barriers that will come to the fore if an organization asks their lawyers about it,” Moore said. “It turns out that many of those risks, on deeper inspection, can be mitigated and overcome. But there has to be institutional will to do it.” One irony of this problem is that the cybersecurity community has been hyper-focused on information sharing in recent years — but the focus has been on companies sharing hacking threats from the past day or two so they can guard against them. The government has championed these threat-sharing operations and facilitates them through a set of organizations called information sharing and analysis centers and information sharing and analysis organizations. That sort of sharing has a clear benefit for companies because it helps them defend against threats that may be coming in the next hour or day. But companies have made less progress on sharing longer-range cybersecurity information that can help address more fundamental cybersecurity challenges, Moore said.


The Connection Between Strategy And Enterprise Architecture (Part 3)

Business capabilities connect the business model with the enterprise architecture, which is composed of the organizational structure, processes, and resources that execute the business model. It is a combination of resources, processes, values, technology solutions, and other assets that are used to implement the strategy. ... business capabilities comprise a fundamental building block that enables and supports the business transformation initiatives companies are undertaking to remain relevant in the constantly changing marketplace. Companies that excel in mapping their existing capabilities and creating a road map to close the gap in their future capabilities are most likely to remain ahead of the competition by responding effectively to industry and market dynamics. Therefore, the way we connect the company’s high-level strategic priorities and objectives to the resources, processes, and ultimately the system landscape that execute the strategy is by mapping and modeling the necessary capabilities.


Leading Innovation = Managing Uncertainty

McKinsey Quarterly (2019) - Three Horizons Framework
Uncertainty is the central characteristic of innovation. While generating new ideas and inventing new technologies is important, it is even more important for innovators to identify the unknowns that have to be true for their ideas and technologies to succeed in the market. We can only claim to have succeeded at innovation when we find the right business model to profitably take our idea or technology to market. At the strategy level, several frameworks have been developed to help leaders understand their product and service portfolios and make decisions. These frameworks use different dimensions that hide in plain sight the real challenge leaders are facing: managing uncertainty. ... The McKinsey framework is perhaps the most popular of them all. This framework maps two dimensions, value and time, to create three horizons. The nearest horizon is Horizon 1, where we extend the core and generate value for the company straight away. In Horizon 2, we build businesses around new opportunities with the potential to impact revenues in the near term. The farthest horizon is Horizon 3, where visionaries work on viable options that will only deliver value to the company after several years.


Cloud Security Architectures: Lifting the Fog from the Cloud

The user behavior analytics (UBA) security solutions oriented primarily to the insider threat have matured and are commonplace. ... This data is crucial for forensics analyses when a major breach has been detected. There isn’t a comparable mature technology yet for the cloud where users have migrated their work. The successful UBA technology teaches security professionals how to properly architect new enterprise systems with users’ cloud behaviors in full view. The core approach for the cloud is to gather data from cloud storage logs, extract features and carefully architect sets of indicators that detect likely breaches. Cloud logs can reveal, for example, when file extensions are changed, what documents are downloaded and to where, whether a document has been downloaded to an anonymous user, and when an unusually high number of documents are downloaded to an odd geolocation. These are all early indicators of potential breach activity.
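The indicator approach described above can be sketched as a simple scan over log records. The record fields, region set, and threshold below are illustrative assumptions; real cloud-storage log schemas differ by provider.

```python
EXPECTED_GEOS = {"US", "DE"}   # assumption: the org's usual regions
BULK_THRESHOLD = 100           # downloads per user considered unusual

def ext(name):
    """File extension, or '' if the name has none."""
    return name.rsplit(".", 1)[-1] if "." in name else ""

def breach_indicators(records):
    """Scan cloud-storage log records for the early indicators named
    in the article: extension changes, anonymous downloads, odd
    geolocations, and unusually high download volumes."""
    alerts, downloads = [], {}
    for r in records:
        if r["event"] == "rename" and ext(r["old"]) != ext(r["new"]):
            alerts.append(("extension_changed", r["user"], r["new"]))
        elif r["event"] == "download":
            if r["user"] == "anonymous":
                alerts.append(("anonymous_download", r["file"]))
            if r["geo"] not in EXPECTED_GEOS:
                alerts.append(("odd_geolocation", r["user"], r["geo"]))
            downloads[r["user"]] = downloads.get(r["user"], 0) + 1
    alerts += [("bulk_download", u, n) for u, n in downloads.items()
               if n > BULK_THRESHOLD]
    return alerts

# Hypothetical log sample
records = [
    {"event": "rename", "user": "bob", "old": "q3.xlsx", "new": "q3.locked"},
    {"event": "download", "user": "anonymous", "file": "plan.pdf", "geo": "US"},
    {"event": "download", "user": "alice", "file": "roadmap.doc", "geo": "XX"},
]
alerts = breach_indicators(records)
print(alerts)
```

A production system would of course score and correlate these signals rather than alert on each in isolation, but the feature-extraction step is of this form.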



Quote for the day:


"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani


Daily Tech Digest - April 20, 2019

How to reconstruct your business’s value chain for the digital world

What’s the big advantage of digital? It allows you to disconnect yourself from physical constraints. With Uber, you no longer have to be in the street to hail a cab. You can order a cab from anywhere. If you digitize the supply-chain process, you are no longer linking the production of the product to one physical location. In the analog world, a person would check the inventory and write an order for supplies. When there was a spike in demand, that person would call more people and write more orders for more supplies. But in the digital world, you can create a manufacturing process where your inventory, recipes, and prices are all available on a digitized, harmonized ecosystem. When demand spikes, you can turn the dial on your robotic process automation (RPA) tool. When we digitize and harmonize complex business processes, we no longer have to call a guy who orders a part. Instead, you have a view into the inventory across multiple suppliers. The CIO has a unique and critical role in digital transformation, as long as they don’t fall into a few common traps. One such trap is when the CEO throws money at you and tells you, “Bring me this shiny new technology.”


Why Enterprise-Grade Cybersecurity Needs a Federated Architecture

A federated architecture combines the strengths of centralized and distributed and is, therefore, a kind of “best of both worlds” approach. With federated, a controller is placed in each data center or public cloud region (just like distributed), but those multiple controllers act in concert so as to provide the abstraction that there is one centralized controller. All of the controllers in a federated architecture communicate with each other to share information about the organization’s security policy as well as the workloads that are being secured. This type of architecture is the best when it comes to securing global infrastructure at scale. And, as is typically the case when writing enterprise-grade software, making the right architectural choice and then implementing it in an elegant way required our architects and engineers to spend a little more time and be a little more thoughtful. Our ultimate goal was to deliver an enterprise-scale architecture that delivered the benefits of a federated architecture without the downsides of distributed and centralized.


Ready for 6G? How AI will shape the network of the future


Take the problem of coordinating self-driving vehicles through a major city. That’s a significant challenge, given that some 2.7 million vehicles enter a city like New York every day. The self-driving vehicles of the future will need to be aware of their location, their environment and how it is changing, and other road users such as cyclists, pedestrians, and other self-driving vehicles. They will need to negotiate passage through junctions and optimize their route in a way that minimizes journey times. That’s a significant computational challenge. It will require cars to rapidly create on-the-fly networks, for example, as they approach a specific junction—and then abandon them almost instantly. At the same time, they will be part of broader networks calculating routes and journey times and so on. “Interactions will therefore be necessary in vast amounts, to solve large distributed problems where massive connectivity, large data volumes and ultra low-latency beyond those to be offered by 5G networks will be essential,” say Stoica and Abreu.


IT Governance 101: IT Governance for Dummies, Part 2

One of the powerful aspects of COBIT is that it acts as the glue between governance and management, describing both governance and management processes. Its concept of cascading enterprise goals to IT goals to enabler goals and metrics ensures consistent communication and alignment. These enablers such as Processes are where all the IT management frameworks can be plugged in, helping to give the frameworks a business context and ensuring that they focus on delivering value and outcomes, not just outputs. As stated by one expert in the UAE, “I think often because organizations do not do a goals cascade things feel disconnected and orphaned, but once you do a proper goals cascade you can see and feel the interconnection and how goals are interdependent on each other to achieve the enterprise-level goals. ... Clearly, these exploding business demands for new benefits exist and, at the same time, IT is expected to make everything secure, replace all that legacy stuff that is slowing down the Ubering, and stop IT from breaking as well.


Some internet outages predicted for the coming month as '768k Day' approaches

The good news is that network admins have known about 768k Day for a long time, and many have already prepared, either by replacing old routers with new gear or by making firmware tweaks to allow devices to handle global BGP routing tables that exceed even 768,000 routes. "Yes, TCAM memory settings can be adjusted to help mitigate, and even go beyond 768k routes on some platforms, which will work if you don't run IPv6. These setting changes require a reboot to take effect," Troutman said. "The 768k IPv4 route limit is only a problem if you are taking ALL routes. If you discard or don't accept /24 routes, that eliminates half the total BGP table size. "The organizations that are running older equipment should know this already, and have the configurations in place to limit installed prefixes. It is not difficult," Troutman added. "I have a telco ILEC client that is still running their network quite nicely on old Cisco 6509 SUP-720 gear, and I am familiar with others, too," he said.
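The /24-filtering mitigation Troutman describes can be illustrated in a few lines of Python. This is only a sketch of the idea, not router configuration, and discarding /24s is safe only if a covering aggregate or default route still provides reachability to those destinations.

```python
import ipaddress

def accepted(routes, max_prefix_len=23):
    """Keep only routes less specific than /24 -- the table-size
    mitigation for routers near their TCAM limit."""
    return [r for r in routes
            if ipaddress.ip_network(r).prefixlen <= max_prefix_len]

# Hypothetical slice of a BGP table (documentation prefixes)
table = ["10.0.0.0/8", "192.0.2.0/24", "198.51.100.0/24", "203.0.112.0/22"]
print(accepted(table))   # the two /24s are dropped
```

Since /24s make up roughly half the entries in the global IPv4 table, this kind of inbound filter cuts installed prefixes nearly in half, which is exactly why it buys headroom on platforms with a 768k TCAM ceiling.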


Bots Are Coming! Approaches for Testing Conversational Interfaces

When testing such interfaces, natural language is the input, and we humans really love having alternatives; we love our synonyms and our expressions. Testing in this context moves from pure logic to something closer to fuzzy logic and clouds of probabilities. As they are intended to provide natural interaction, testing conversational interfaces also requires a great deal of empathy and understanding of human society and ways of interacting. In this area, I would include cultural aspects, including paraverbal aspects of speech (that is, all communication happening beside the spoken message, encoded in voice modulation and level). These elements add complexity, and many times the person doing the testing needs to consider such aspects. I believe it’s fair to say that testing a conversational interface can also be seen as tuning it so that it passes a Turing test. Another challenge faced when testing such interfaces is the distributed architecture of the systems involved.


Protecting smart cities and smart people

For as long as most can remember, information security was a technology concern, handled by technologists, and discussed by security engineers and associated professionals. The security vendors presented at security conferences, and the security professionals attended accordingly. Cat people with cat people. You know how it goes. Within a smart-city ecosystem, we need to extend the cyber conversation beyond the traditional players. How do we make the City Planner appreciate what we understand? How do we share and apply security best practices to an engineering company providing a Building Information Modelling (BIM) service to a hospital or defence project? Moreover, how do we, in the first instance, highlight the security concerns? Attending and speaking at numerous cyber conferences, I sometimes wonder: is this the right audience? In this digital ecosystem, we should be speaking to civic and government leaders about the security concerns facing smart cities and critical infrastructure, not exclusively to other security professionals. They are well aware of the challenges and the resistance experienced.


Don't underestimate the power of the fintech revolution

According to Bank of England Governor Mark Carney, FinTech’s potential is to unbundle banking into its core functions - such as settling payments and allocating capital. For central bankers and regulators who are monitoring the sector, the growth of fintech is akin to any other disruptive technology - that is, will it lead to financial instability? Most fintech start-ups are not regulated as much as traditional financial institutions. So far, it’s the more open financial markets that have seen fintech develop rapidly. One example is the e-payment system M-Pesa, which operates in Kenya, Tanzania and elsewhere, and is one of the biggest fintech success stories since its emergence just a decade ago. By effectively transforming mobile phones into payment accounts, M-Pesa has increased financial access for previously unbanked people. The permissive stance of the Kenyan central bank allowed the sector to develop rapidly in one of East Africa’s most developed economies.


Data Breaches in Healthcare Affect More Than Patient Data

Cybercriminals go after any data they perceive to be valuable, says Rebecca Herold, president of Simbus, a privacy and cloud security services firm, and CEO of The Privacy Professor consultancy. "Payroll data contains a wide range of really valuable data that cybercrooks can sell to other crooks for high amounts," she says. "With the growing number of pathways into healthcare systems and networks ... that are being established through employee-owned devices, through third parties/BAs, and through IoT devices, I believe that such fraud is increasing because of the many more opportunities that crooks have now to commit these types of crimes." The recent attacks on Blue Cross of Idaho and Palmetto Health spotlight the importance for healthcare entities to diligently safeguard all data, says former healthcare CISO Mark Johnson of the consultancy LBMC Information Security. The attacks "underscore for me that the healthcare industry needs to protect the entire environment, not just their large systems like the EMR," he says.


Why Your DevOps Is Not Effective: Common Conflicts in the Team

In the DNA of DevOps culture lies the principle of constant, continuous interaction and collaboration between different people and departments. The key reason for this is much greater final efficiency and much shorter time-to-market compared with the traditional approach. Proper implementation of DevOps shifts the focus from personal effectiveness to team efficiency. At the same time, thanks to automation and the widespread introduction of monitoring and testing, it is possible to catch a problem at an early stage and quickly find its causes. Building the right culture in the organization is important, and it does not depend on DevOps directly: problems occur in all companies, but in an organization with the right culture, all forces will be thrown at solving the problem and preventing it in the future, rather than at finding and punishing a guilty party.



Quote for the day:


"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor


Daily Tech Digest - April 18, 2019

Automation is a machine, and a machine only does what it is told to do. Complicated tests require a lot of preparation and planning and also have certain boundaries; the script follows the protocol and tests the application accordingly. Ad-hoc testing helps testers to answer questions like, “What happens when I follow X instead of Y?” It encourages the tester to think and test with an out-of-the-box approach, which is difficult to program into an automation script. Even visual cross-browser testing needs a manual approach. Instead of depending on an automated script to find visual differences, you can check for issues manually, either by testing on real browsers and devices or, even better, by using cloud-based cross-browser testing tools, which allow you to test your website seamlessly across thousands of different browser-device-operating system combinations. ... Having a manual touch throughout the testing procedure, instead of depending entirely on automation, will ensure that there are no false positives or false negatives in the test results after a script is executed.


Understanding the key role of ethics in artificial intelligence

It has become faddish to talk about the importance of ethical AI and the need for oversight, transparency, guidelines, diversity, etc., at an abstract, high level. This is not a bad thing, but it often assumes that such ‘talk’ is sufficient to address the challenges of ethical AI. The facts, however, are much more complex. For example, guidelines themselves are often ineffective (a recent study showed the ACM’s code of ethics had little effect on the decision-making process of engineers). Moreover, even if we agree on how an AI system should behave (not trivial), implementing specific behavior in the context of the complex machinery that underpins AI is extremely challenging. ... Ethics in AI is extremely important given the proliferation of AI systems in consequential areas of our lives: college admissions, financial decision-making systems, and the news we consume on Facebook and other media sites.


Researchers: Malware Can Be Hidden in Medical Images
The "flaw" discovered in the DICOM file format specification could allow attackers to embed executable code within DICOM files to create a hybrid file that is both a fully functioning Windows executable as well as a specification-compliant DICOM image that can be opened and viewed with any DICOM viewer, the report says. "Such files can function as a typical Windows PE file while maintaining adherence to the DICOM standard and preserving the integrity of the patient information contained within," according to the report. "We've dubbed such files, which intertwine executable malware with patient information, PE/DICOM files." By exploiting this design flaw, the report says, attackers could "take advantage of the abundance and centralization of DICOM imagery within healthcare organizations to increase stealth and more easily distribute their malware, setting the stage for potential evasion techniques and multistage attacks." The fusion of fully functioning executable malware with HIPAA protected patient information adds regulatory complexities and clinical implications to automated malware protection and typical incident response processes, the researchers say.
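Because such a hybrid must carry both signatures, a defender can cheaply flag suspect files. A minimal sketch based on the published file layout (a DICOM Part 10 file starts with a 128-byte free-form preamble followed by the magic bytes `DICM`, while a Windows PE file begins with an `MZ` header); the function name is our own:

```python
def looks_like_pe_dicom(path):
    """Flag files that are simultaneously a Windows PE and a DICOM image.

    A DICOM Part 10 file starts with a 128-byte free-form preamble
    followed by the magic bytes b'DICM'; a Windows executable starts
    with the MZ header. A file showing both signatures is a hybrid.
    """
    with open(path, "rb") as f:
        header = f.read(132)
    if len(header) < 132:
        return False
    return header[:2] == b"MZ" and header[128:132] == b"DICM"
```

Scanning incoming studies with a check like this will not stop the underlying design flaw, but it makes the specific PE/DICOM fusion easy to quarantine before images reach a viewer.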


Sometimes, rather than look at problem areas in the business, he says the team focuses on exploring pure technology. As an example, Chatrain points to Generative Adversarial Networks (GANs), algorithms that generate fake data, such as pictures of people who do not actually exist. “We dedicate part of our exploratory time to such techniques and technologies and then look for applications,” he says. Looking at a practical example of how a fake data algorithm could be deployed, he says: “With GDPR and the need to feed test systems with high volumes of realistic data, we used [synthetic data algorithms] to create fake travellers with travel itineraries.” Such synthetic data is indistinguishable from the data that represents the travel plans of real people, and it can be used to test the robustness of systems at Amadeus. “Today, no one tests the systems if we have twice as much data,” says Chatrain. But this becomes possible if data for a vast increase in passenger numbers is simply generated by a synthetic data algorithm. Beyond testing application software, he says synthetic data also enables Amadeus to anonymise the data it shares with third parties. “We are not allowed to share [personal] data, but we still need a business partnership.”
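The fake-traveller idea can be illustrated in a few lines of standard-library Python. The airport codes, name format and date ranges below are invented for illustration and have nothing to do with Amadeus’s actual pipeline:

```python
import random
from datetime import date, timedelta

AIRPORTS = ["NCE", "CDG", "LHR", "JFK", "MAD", "FRA"]  # illustrative IATA codes

def fake_traveller(rng):
    """Generate one synthetic traveller with a simple return itinerary."""
    origin, dest = rng.sample(AIRPORTS, 2)              # two distinct airports
    depart = date(2019, 1, 1) + timedelta(days=rng.randrange(365))
    ret = depart + timedelta(days=rng.randrange(1, 15))
    return {
        "name": f"Traveller-{rng.randrange(10**6):06d}",  # no real PII involved
        "outbound": {"from": origin, "to": dest, "date": depart.isoformat()},
        "inbound": {"from": dest, "to": origin, "date": ret.isoformat()},
    }

rng = random.Random(42)  # seeded, so generated test data is reproducible
travellers = [fake_traveller(rng) for _ in range(1000)]
```

Doubling the synthetic passenger volume for a load test is then just a matter of changing 1000 to 2000, with no personal data involved at any point.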


What is project portfolio management? Aligning projects to business goals

With PPM, not only are project, program, and portfolio professionals able to execute at a detailed level, but they are also able to understand and visualize how project, program, and portfolio management ties into an organization’s vision and mission. PPM fosters big-picture thinking by linking each project milestone and task back to the broader goals of the organization. ... Capacity planning and effective resource management depend largely on how well your PMO executes its strategy and links the use of resources to company-wide goals. It is no secret that wasted resources are one of the biggest issues companies encounter when it comes to scope creep. PPM decreases the chances of wasted resources by ensuring resources are allocated based on priority and are effectively sequenced and wisely leveraged to meet intended goals. ... PMOs that communicate to project teams and other stakeholders, such as employees, why and how project tasks are vital in creating value increase the likelihood of higher productivity.


Startup MemVerge combines DRAM and Optane into massive memory pool
Optane memory is designed to sit between high-speed memory and solid-state drives (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD-like persistence. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor. Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as persistent storage, but apps don’t natively support it; they need to be tweaked to work properly with Optane memory. As it was explained to me, apps aren’t designed for persistent memory, where data is already in memory on power-up rather than having to be loaded from storage. The app has to know that memory doesn’t go away and that it does not need to shuffle data back and forth between storage and memory. Therefore, apps don’t natively work with persistent memory.
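In practice, App Direct Mode is typically programmed through memory-mapped files (for example via Intel’s PMDK on a DAX-mounted filesystem). A rough standard-library-only sketch of the store-then-flush pattern an app must adopt, using an ordinary file to stand in for a DAX mount:

```python
import mmap

def persist(path, payload, size=4096):
    """Write payload via plain memory stores into a mapped region, then flush.

    On real App Direct hardware, `path` would sit on a DAX-mounted
    filesystem, so these stores would land directly in persistent media.
    """
    with open(path, "w+b") as f:
        f.truncate(size)
        region = mmap.mmap(f.fileno(), size)
        region[:len(payload)] = payload   # plain stores, not write() syscalls
        region.flush()                    # the app must persist explicitly
        region.close()

def recover(path):
    """Read the region back, as an app would after a restart."""
    with open(path, "rb") as f:
        return f.read()
```

The point of the sketch is the inversion the article describes: persistence is the default, so the explicit step is flushing stores, not loading data from storage at startup.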


The group hopes to turn out the first iteration of its Token Taxonomy Framework (TTF) later this year; afterward it plans to educate the blockchain community and collaborate through structured Token Definition Workshops (TDW) to define new or existing tokens. Once defined, the taxonomy can be used by businesses as a baseline to create blockchain-based applications using digital representations of everything from supply chain goods to non-fungible items such as invoices. "We'll do some workshops...to validate and make sure we have the base definition of a non-fungible token," said Marley Gray, Microsoft's principal architect for Azure blockchain engineering and a member of the EEA's Board of Directors. "As we go through workshops, we will probably find we should add this attribute or this clarification or this example that helps someone understand it." The organizations that have agreed to participate in the standardization effort include Accenture, Banco Santander, Blockchain Research Institute, BNY Mellon, Clearmatics, ConsenSys, Digital Asset, EY, IBM, ING, Intel, J.P. Morgan, Komgo, R3, and Web3 Labs.



Each micro-component runs an independent processing flow that performs a single task. For example, if your application has a network layer, you may also have Network Receiver and Network Sender components which only have the responsibility for receiving/sending data through the network. If your application has a logging layer it might also be implemented as an independent micro-component. Each micro-component defines its own interface of outgoing/incoming events, and the internal processing flow for them. For example, the Network Receiver might define the OutgoingClientRequests channel, which would be populated with newly received requests from the users. Interfaces, as you might guess, are implemented on top of channels, so the communication flows look very obvious, predictable, and easily maintainable in this perspective. The core’s role is to connect various outgoing channels with various incoming channels and to enable data flow between various micro-components.
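This channel wiring maps naturally onto queues. A minimal sketch, with a hypothetical NetworkReceiver and a Logger micro-component connected by the core (component names and the sentinel protocol are our own, matching the example above):

```python
import queue
import threading

def network_receiver(outgoing_client_requests):
    """Micro-component: pretend to receive requests and publish them."""
    for i in range(3):                        # stands in for real socket reads
        outgoing_client_requests.put(f"request-{i}")
    outgoing_client_requests.put(None)        # sentinel: channel closed

def logger(incoming_events, log):
    """Micro-component: consume events from its incoming channel."""
    while True:
        event = incoming_events.get()
        if event is None:
            break
        log.append(f"logged {event}")

# The core's only job: connect outgoing channels to incoming ones.
channel = queue.Queue()
log = []
workers = [
    threading.Thread(target=network_receiver, args=(channel,)),
    threading.Thread(target=logger, args=(channel, log)),
]
for w in workers:
    w.start()
for w in workers:
    w.join()
# log == ['logged request-0', 'logged request-1', 'logged request-2']
```

Neither component knows the other exists; swapping the logger for a different consumer only requires the core to rewire one channel.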


Cisco Talos details exceptionally dangerous DNS hijacking attack

Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by DNSpionage. In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers. In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. Let's Encrypt provides X.509 certificates for Transport Layer Security (TLS) free of charge to the user, Talos said. The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization.


Wipro Detects Phishing Attack: Investigation in Progress

Wipro's systems were seen being used as jumping-off points for digital phishing expeditions targeting at least a dozen Wipro customer systems, the blog says. "Wipro's customers traced malicious and suspicious network reconnaissance activity back to partner systems that were communicating directly with Wipro's network," according to the blog. In a statement, Wipro says: "Upon learning of the incident, we promptly began an investigation, identified the affected users and took remedial steps to contain and mitigate any potential impact." The firm tells ISMG that none of its customers' credentials have been affected, as was alleged in the blog. Some security experts, however, say Wipro may be the victim of a nation-state sponsored attack. "It is most likely by a nation-state. They use this modus operandi to breach a vendor network first and through that route attack their customers," says a Bangalore-based security expert, who did not wish to be named. "That is because customers will consider Wipro's network safe."



Quote for the day:


"A good leader leads the people from above them. A great leader leads the people from within them." -- M.D. Arnold


Daily Tech Digest - April 17, 2019

What SDN is and where it’s going

The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network. Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for Pluribus. “At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.” ... Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said.


Use of AI in wealth management must be applied smartly

“AI can offer a solution to these problems by helping to automate on-boarding processes, provide smarter access to data and create new customer experiences. However, it’s critical any implementation be undertaken smartly. It shouldn’t be a case of automating for automation’s sake. Because of this we see the use of AI best applied in small-steps. “This starts with automating and streamlining manual processes, such as onboarding a new client. This could include all forms of engagement from initial communications, anti-money laundering checks, risk profiling, and all the legal documentation in between. Additionally, by using intelligent information management solutions, staff have the means to simplify how they access, secure, process and collaborate on documentation. Doing so will aid productivity, enabling staff to find and access information across their systems much faster so they can build stronger relationships with their clients.


Security Is Key To The Success Of Industry 4.0

There is often a perception among manufacturers that cloud computing is less secure than managing data on-site. The reality is that the opposite is true. Network security is closely related to physical access. After all, in an on-site server room, anyone could gain access, pop in a USB stick, and steal sensitive information. Conversely, cloud vendors store data in locations locked down with security guards and numerous physical barriers between any would-be hacker and the target server. Additionally, the cloud offers more network resilience. Businesses that rely on on-premise servers face exposure and operational risk during an act of force majeure, such as a fire or natural disaster. With the cloud, that risk is spread over multiple secure locations, significantly reducing the chance of disruption. Security is an ongoing concern; there will always be new vulnerabilities. Many of the biggest hacks – such as the Petya malware, which first appeared in 2016 – targeted old Windows technology, which is why it is key to ensure the software is always up to date.


C-Suite: The New Main Target of Phishing

Evolving phishing attacks mean that criminals are continually looking for new ways to completely mask their malicious URLs, especially on mobile devices. They either hide them behind a page like Google Translate that users are already familiar with or completely trick users with custom web fonts and altered characters. One of the latest approaches is to create an Office 365 meeting invite that contains quiz buttons or a poll asking recipients to pick the topic or date for the next meeting; employees that end up clicking are presented with a fake Office 365 login page where they enter their O365 credentials and then lose control over their email account. Another approach is an email that comes from someone you know with a request to take a look at something for them. When you click on the link or attachment, malware installs on your system, takes over your email client, and then emails the same message from you to all your contacts. All is not lost, however. There is a way to help prevent and thwart these attacks. You need a security awareness program that instils a culture of security throughout your organization starting in the boardroom and leading by example.



While this bill remains on the House and Senate floor, there are some ways that state and local governments can begin securing their systems. The first step should be an audit, allowing key decision-makers to get on the same page about the status of their security. This audit should include secretaries of state, members of the academic community and all cybersecurity staff. Everyone should review the cybersecurity controls and the threat vectors that have been exploited in local systems. Improperly informed stakeholders are the greatest vulnerability. U.S. election security needs greater state-by-state alignment. Election systems are managed through a hodgepodge of approaches that vary from state to state, including paper ballots, electronic screens and Internet voting. Before local elections, midterms and the 2020 presidential election, state officials need to meet with their Boards of Elections and document their end-to-end election process with all of its systems, dependencies and interfaces.


Surviving the existential cyber punch

Top-notch organisations understand the threat environment well. They invest time and effort to maintain situational awareness as to who also values their information and could serve as a threat. They understand that threats may come from many vectors including the physical environment, natural disasters, or human threats. Further, they understand that human threats include such entities as vandals, muggers, burglars, spies, saboteurs, and careless, negligent or indifferent personnel in their own ranks. They invest in information sharing organisations, subscribe to threat information sources, and share their own observations as part of the Cyber Neighbourhood Watch construct. These organisations also know the importance of maintaining positive relationships with the cyber divisions of law enforcement organisations. Even before you have been attacked, your local cyber law enforcement organisation can serve as a rich source of threat intelligence that can help you better manage your cyber risk exposure.


Should that be a Microservice? Keep These Six Factors in Mind


If a module needs to have a completely independent lifecycle, then it should be a microservice. It should have its own code repository, CI/CD pipeline, and so on. Smaller scope makes it far easier to test a microservice. I remember one project with an 80-hour regression test suite! Needless to say, we didn’t execute a full regression test very often. A microservice approach supports fine-grained regression testing. This would have saved us countless hours. And we would have caught issues sooner. ... If the load or throughput characteristics of parts of the system are different, they may have different scaling requirements. The solution: separate these components out into independent microservices! This way, the services can scale at different rates. Even a cursory review of a typical architecture will reveal different scaling requirements across modules. Let’s review our Widget.io Monolith through this lens.


Strong security defense starts with prioritizing, limiting data collection

As cybercrime, user fraud and other security threats become more prevalent and damaging, the ability to confidently know who you’re dealing with online has become essential. What most companies tend to overlook, however, is the responsibility and liability they automatically assume when they collect and store personal data in order to validate their constituents. As a result, some businesses hold large volumes of personal data because they believe it’s necessary for comprehensive identity and credential verification, but this practice can be risky, especially for companies with weak or limited data protection protocols in place. Data breaches have costly repercussions, including loss of customers, compromised intellectual property, loss of brand trust and, of course, the meaningful revenue declines that result; but regulatory penalties can be the most expensive consequence of all. For example, violating GDPR’s strict rules around data privacy can warrant fines of up to €20M, or 4 percent of the worldwide annual revenue of a company, whichever is greater.


How botnets pose a threat to the IoT ecosystem 


Botnets are particularly challenging because they evolve over time and new forms constantly emerge, one of which is TheMoon. Benjamin tells Computer Weekly: “Threat researchers at CenturyLink’s Black Lotus Labs recently discovered a new module of IoT botnet called TheMoon, which targets vulnerabilities in routers within broadband networks.” Benjamin explains that a previously undocumented module, deployed on MIPS devices, turns the infected device into a Socks proxy that can be sold as a service. “This service can be used to circumnavigate internet filtering or obscure the source of internet traffic as a part of other malicious actions,” he says.  Attackers are using botnets such as TheMoon for a range of crimes, including credential brute forcing, video advertisement fraud and general traffic obfuscation. “For example, our team observed a video ad fraud operator using TheMoon as a proxy service, impacting 19,000 unique URLs on 2,700 unique domains from a single server over a six-hour period,” says Benjamin.


Cryptocurrencies Will Never Replace Us, Cries Romanian Central Bank Official

Daianu went on to defend the state’s role in issuing currency, saying that it was the ‘only possible last-resort lender’. In this regard, the central bank official implied that during a financial crisis, only the state can save the situation: “In markets, the state is the only possible last-resort lender. When the banking system was saved, it wasn’t crypto banks that were saved. Central banks intervened by issuing base currency, which was followed by non-conventional measures.” This statement is likely to get Daianu in trouble with crypto enthusiasts, as the unhindered printing of money is what spawned cryptocurrencies as we know them today. The central bank official also revealed that centralized institutions are yet to understand the importance of the deflationary approach cryptocurrencies such as Bitcoin have taken. This was demonstrated by his statement that the central banks’ answer to cryptocurrencies is to issue a digital currency that can ‘multiply’!



Quote for the day:


"And the trouble is, if you don’t risk anything, you risk more." -- Erica Jong


Daily Tech Digest - April 16, 2019

IT pursues zero-touch automation for application support


Automation is a top goal, from application conception -- or selection, in the case of a third-party business application -- through adoption and use. Executive-level management wants zero-touch automation that controls every application, all the IT resources it runs on and every step of every development and operations process. Zero-touch automation, sometimes called ZTA, covers two specific goals: sustain an infrastructure that supports applications, databases and workers, and accurately automate application mapping onto IT infrastructure. The former is about analytics and capacity planning, and the latter underpins practices such as DevOps and orchestration. DevOps, both as technologies and cultural changes that drive faster, better software delivery and operations, predates advances in cloud computing and virtualization. Development teams would build something and turn it over to operations to run, without consideration for the operational deployment requirements.


Nutanix powers Manchester City Council’s IT


The council assessed Nutanix, HPE SimpliVity, HPE Synergy and the VxRail appliance from Dell-EMC and VMware. Farrington says it selected Nutanix running on a Supermicro appliance because “Nutanix offered the closest to a silver bullet – we could get everything from a single vendor”. In Farrington’s experience, HCI gives the council greater flexibility than traditional IT infrastructure. One benefit is a distributed storage fabric with thin provisioning, which enables the council to make the most of its storage capacity. “We have the ability to scale quickly. The ability to add another storage and compute device quickly is beneficial,” he says. “We also benefit from the deduplication and compression services that are built in.” HCI has also provided a way to bring together the support teams for Windows servers and storage. “I had six teams to look after the datacentre facility,” says Farrington. “Historically, we had two teams – one looked after our 900 Windows servers, the other looked after storage and backup. ...”


Top 10 Features to Look for in Automated Machine Learning


Feature engineering is the process of altering the data to help machine learning algorithms work better, which is often time-consuming and expensive. While some feature engineering requires domain knowledge of the data and business rules, most feature engineering is generic. Look for an automated machine learning platform that can automatically engineer new features from existing numeric, categorical, and text features. You will want a system that knows which algorithms benefit from extra feature engineering and which don’t, and only generates features that make sense given the data characteristics. ... It’s quite standard for machine learning software to train the algorithm on your data. After all, you wouldn’t want to do Newton-Raphson iteration manually, would you? Probably not. But often there’s still the hyperparameter tuning to worry about. Then you want to do feature selection, to improve both the speed and accuracy of a model. Look for an automated machine learning platform that uses smart hyperparameter tuning, not just brute force, and knows the most important hyperparameters to tune for each algorithm.
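As a point of contrast with brute-force grid search, here is a toy random-search sketch of hyperparameter tuning. The score function, parameter names and ranges are made up for illustration; real AutoML platforms use smarter strategies (e.g. Bayesian optimization) on top of the same idea:

```python
import random

def score(params):
    """Stand-in for a cross-validated model score (higher is better)."""
    return -(params["lr"] - 0.1) ** 2 - 0.01 * (params["depth"] - 6) ** 2

def random_search(n_trials, rng):
    """Sample the hyperparameter space rather than exhaustively gridding it."""
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = {"lr": rng.uniform(0.001, 1.0),
                     "depth": rng.randrange(1, 16)}
        s = score(candidate)
        if s > best_score:
            best_params, best_score = candidate, s
    return best_params, best_score

best, best_found = random_search(200, random.Random(0))
```

With a fixed trial budget, random search spends samples across the whole space instead of wasting them on unimportant axes, which is why it usually beats an equally sized grid.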


Machine Learning Widens the Gap Between Knowledge and Understanding


Given how imperfect our knowledge has always been, this assumption has rested upon a deeper one. Our unstated contract with the universe has been that if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus, at least, somewhat pliable to our will. But now that our new tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we’re beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it. Our newly capacious machines can get closer to understanding it than we can, and they, as machines, don’t really understand anything at all. This, in turn, challenges another assumption we hold one level further down: The universe is knowable to us because we humans (we’ve assumed) are uniquely able to understand how the universe works. At least since the ancient Hebrews, we have thought ourselves to be the creatures uniquely made by God with the capacity to receive His revelation of the truth.


How Azure uses machine learning to predict VM failures


On average, disk errors start showing up between 15 and 16 days before a drive fails, and in the last 7 days before it fails reallocated sectors triple and device resets go up tenfold. Behaviour and failure patterns vary from one drive manufacturer to another, and even between different models of hard drive from the same vendor. The telemetry for training the machine learning system has to be collected from different kinds of workloads, because that affects how quickly the failure is going to happen: if the VM is thrashing the disk, a drive with early signs of failure will fail fairly quickly, whereas the same drive in a server with a less disk-intensive workload could carry on working for weeks or months. Azure has a similar machine-learning system that predicts failures of compute nodes. In both cases, instead of trying to definitively predict whether a specific piece of hardware is failing, the systems rank them in order of how error-prone they are. The top systems on the list stop accepting new VMs and have running VMs live-migrated off onto different nodes, and then get taken out of service for testing.
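The ranking idea can be sketched without any learned model at all: score each node by recent error telemetry and take the worst off the rotation. The counter names and weights below are invented for illustration; Azure's system learns this ordering from telemetry rather than hand-coding it:

```python
def rank_nodes(telemetry, top_n=2):
    """Order nodes from most to least error-prone and flag the worst.

    `telemetry` maps node id -> dict of recent error counters. The
    weights are illustrative; a production system would learn them.
    """
    def risk(counters):
        return (3.0 * counters.get("reallocated_sectors", 0)
                + 2.0 * counters.get("device_resets", 0)
                + 1.0 * counters.get("disk_errors", 0))

    ranked = sorted(telemetry, key=lambda n: risk(telemetry[n]), reverse=True)
    # Top-ranked nodes stop taking new VMs, get their running VMs
    # live-migrated away, and are drained for testing.
    return ranked, set(ranked[:top_n])

nodes = {
    "node-a": {"reallocated_sectors": 40, "device_resets": 10},
    "node-b": {"disk_errors": 2},
    "node-c": {"reallocated_sectors": 5, "device_resets": 1},
}
ranked, drain = rank_nodes(nodes, top_n=1)
# ranked[0] == "node-a"; drain == {"node-a"}
```

Ranking rather than classifying sidesteps the need for a hard failure threshold: the operator only has to decide how many nodes per day can be drained, not exactly when a drive is "failing".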



SQL Server users could already run the database themselves on Google Cloud Platform (GCP) via VMs, but Google will fully manage the upcoming service through its Cloud SQL offering, which already features PostgreSQL and MySQL. Google's managed SQL Server service will support all editions of SQL Server 2017, which also has backward compatibility with older versions of the database, said Dominic Preuss, director of product management for Google Cloud, at the Cloud Next conference here this week. AWS has offered a similar service through its Relational Database Service for years. Moreover, Microsoft has worked since 2009 on its Azure SQL managed service. Microsoft's effort has endured some fits and starts over the years. Customers that wanted to move very large SQL Server databases to the cloud had to run them on Azure's VM-based service or break them apart into multiple pieces, given Azure SQL's size limitations.


How to deal with backup when you switch to hyperconverged infrastructure

Each HCI vendor offers a hardware configuration using components supported by the virtualization vendors it wishes to support. Since the system comes pre-built you can be assured that all the hardware components will work together and will work with any supported hypervisors. Any incompatibilities between the various components will be handled by the HCI vendor. Some HCI vendors also offer their own hypervisors. The best example of this would be Nutanix with their Acropolis hypervisor. Typically such a hypervisor will offer tighter integration with the HCI hardware and integrated data-protection features. Often, the built-in hypervisor is also less expensive than traditional hypervisors, especially if you take advantage of the native data-protection features. The final type of HCI vendor supports neither VMware nor Hyper-V, nor do they use their own hypervisor. Scale Computing uses the KVM hypervisor, which is open source. Like Nutanix, they do this to reduce their customers’ TCO while offering much of the same functionality that VMware offers. In addition, they also offer integrated data protection.


How AIOps Supports a DevOps World


AIOps can also automate workflows for alerts that require escalation, human attention and/or investigation. For example, alerts on devices supporting business-critical IT services require notification of Level 1 support staff within five minutes of alert receipt. If the alert is from a server and for a specific application, an IT or DevOps user will need to create an incident and route it to the relevant application team. AIOps takes care of this immediately with alert escalation workflows that help program first-response actions for notification and incident creation. Again, this can occur completely unsupervised – no human interaction required – once these policies are established. What’s more, policy-driven AIOps correlates dependencies based on downstream resources or establishes an algorithm-based correlation to address groups of alerts continuously. This drastically frees up time that is typically spent sifting through alert floods, figuring out what to do with them, and then doing it. Advanced AIOps tools use native instrumentation to determine how frequently specific alert sequences occur.
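The escalation flow described above can be sketched as a small policy engine. This is a hypothetical illustration, not any particular AIOps product's API: the `Alert` fields, the rule conditions, and the five-minute SLA are all invented to mirror the example in the paragraph.

```kotlin
// Hypothetical alert model and escalation policy, illustrating the
// AIOps workflow described above; not based on any specific product.
data class Alert(
    val source: String,           // e.g. "server", "network"
    val application: String?,     // non-null when tied to a specific app
    val businessCritical: Boolean
)

sealed class Action {
    data class NotifyLevel1(val withinMinutes: Int) : Action()
    data class CreateIncident(val routeTo: String) : Action()
    object Suppress : Action() {
        override fun toString() = "Suppress"
    }
}

// Policy: business-critical alerts page Level 1 support within 5 minutes;
// application-specific server alerts also open an incident routed to the
// relevant application team. Runs unsupervised once the rules are set.
fun escalate(alert: Alert): List<Action> {
    val actions = mutableListOf<Action>()
    if (alert.businessCritical) {
        actions += Action.NotifyLevel1(withinMinutes = 5)
    }
    if (alert.source == "server" && alert.application != null) {
        actions += Action.CreateIncident(routeTo = "${alert.application}-team")
    }
    if (actions.isEmpty()) actions += Action.Suppress
    return actions
}

fun main() {
    val alert = Alert(source = "server", application = "payments", businessCritical = true)
    // A business-critical server alert yields both a Level 1 page and a routed incident.
    println(escalate(alert))
}
```

In a real deployment these rules would be stored as configurable policies rather than code, but the shape is the same: match alert attributes, then trigger notification and incident-creation actions without human intervention.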


Doing continuous testing? Here's why you should use containers


As nearly every software tester has experienced, test environments are a mixed blessing. On one hand, they allow end-to-end tests that would otherwise have to be executed in production. Without a test environment, testing teams would be shipping code that hasn't been tested across functional boundaries out to users—and hoping for the best. A well-configured and maintained test environment, one that closely mimics production and contains up-to-date code deployments, can provide a safe and sane way for testers to validate a scenario before it gets into the hands of a customer. Problematically, however, test environments encourage a mode of development that is fast becoming outdated: long integration cycles, an untrustworthy main source trunk, and late-stage testing. The most productive, highest-performing engineering teams do just the opposite. They need to be able to trust that code in the main trunk could go to production at any time. They often shift left on quality, with the majority of testing happening before a code change even lands.


Kotlin Multiplatform for iOS Developers


KMP works by using Kotlin to program business logic that is common to your app's various platforms. Each platform's natively programmed UI then calls into that common logic. UI logic must still be written natively in many cases because it is too platform-specific to share. On iOS this means importing a .framework file - originally written in KMP - into your Xcode project, just like any other external library. You still need Swift to use KMP on iOS, so KMP is not the end of Swift. KMP can also be introduced iteratively, so you can adopt it with no disruption to your current project; it doesn't need to replace existing Swift code. Next time you implement a feature across your app's various platforms, use KMP to write the business logic, deploy it to each platform, and program the UIs natively. For iOS, that means business logic in Kotlin and UI logic in Swift. The close similarity between Swift's and Kotlin's syntax removes much of the learning curve involved in writing that KMP business logic.
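As a toy illustration of that split, here is the kind of platform-agnostic business logic that would live in a shared Kotlin module. The `TipCalculator` class and its rules are invented for this example; in a real KMP project this code would sit in the common source set and be consumed from Swift through the generated framework.

```kotlin
// Hypothetical shared business logic for a KMP common module.
// Platform UIs (e.g. SwiftUI on iOS) would call into this class;
// only the UI layer stays platform-specific.
class TipCalculator(private val defaultPercent: Int = 15) {

    // Tip in cents, rounded to the nearest cent.
    fun tipFor(billCents: Long, percent: Int = defaultPercent): Long {
        require(billCents >= 0) { "bill must be non-negative" }
        require(percent in 0..100) { "percent must be between 0 and 100" }
        return (billCents * percent + 50) / 100
    }

    // Bill plus tip, in cents.
    fun totalFor(billCents: Long, percent: Int = defaultPercent): Long =
        billCents + tipFor(billCents, percent)
}

fun main() {
    val calc = TipCalculator()
    println(calc.totalFor(billCents = 2000))  // $20.00 bill plus the default 15% tip
}
```

On the iOS side, after importing the generated framework, the Swift call would look roughly like `TipCalculator(defaultPercent: 15).totalFor(billCents: 2000, percent: 15)` - the logic is written once in Kotlin, while each platform keeps its own native UI around it.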



Quote for the day:


"To double your net worth, double your self-worth. Because you will never exceed the height of your self-image." -- Robin Sharma