Daily Tech Digest - December 31, 2019

How to proceed with deep legacy hardware

"For folks that have applications that are linked to the hardware environment, it's very different for them to get off it. So we'll work with clients especially when they go beyond the end-of-service life," O'Grady said. "It's mainly government, discrete manufacturing, and banking." The equipment is stored at a former DEC manufacturing facility in Salem, NH. Three former DEC-hands work there as technicians. When customers send in hardware for repair, re-homing, or recycling, "They have a little fun to see if their technician ID is on that machine," O'Grady joked. "Any equipment that's demand-constrained, or supply-constrained in the market, we'll keep it here... I don't think there's anything we haven't been able to find for clients," he said, adding that sometimes he works with museums and related organizations for assistance. "The VAX 6000, these are 30-year-old machines. We've got more than several clients that we're helping out long-term. Everyone does not have enough budget dollars to go around to innovate in new technology," so they focus on stabilizing what already works, he explained. "As long as they can keep the hardware environment viable then it works for them."



Ramp up carefully during AIOps implementation


IT organizations can't simply inject an AIOps tool into their monitoring and management roster and expect positive results. Instead, they need to prep IT workflows and infrastructure for an AI-driven strategy. "The first place that IT leaders start their AI journey tends to be process automation," said Chirag Dekate, a Gartner analyst. IT automation itself doesn't equal AIOps, but it propels organizations in the right direction, as it eliminates menial and repetitive tasks for IT staff. First ensure existing IT automation scripts function as they should, Dekate said. Streamlined data management and collection is another prerequisite for AI in IT operations, according to Ari Silverman, director of platform automation and enterprise architecture at OCC, an equity derivatives clearing organization in Chicago. Silverman's team uses LogicMonitor as an AIOps monitoring tool, primarily for predictive analytics and automated capacity planning and management.


The Best Mesh Routers Of 2019

Netgear Orbi RBK13
Netgear has ditched the towers in its latest iteration of the standard Orbi mesh system. The updated system is rectangular, with waves on top that cleverly hide circulation vents to keep the devices cool. It's a solid system, able to cover up to 6,000 square feet with 1.2Gbps of wireless goodness (if you choose a 4-pack). The only bad thing is that, you guessed it, the app is a bit of a mess. Once you get it set up and running, it's a solid system, but getting there can be an exercise in patience. The base system won't see the satellites, and it takes forever for setup steps to complete. In a market with "instant" networks, it's a major gaffe. It's a good thing, then, that the Orbi system is less expensive than most mesh systems. If you have a large area to cover, this is the cheapest way to do it. Where the Orbi WiFi 6 system dominated via networking power, the RBK13 wins by being the cheapest way to get mesh networking into your home. You can get a router and two satellites for under $200.


What Digital Transformation Is (And Isn't)

Over the past few years, enterprise leaders have become captivated by the idea of digital transformation. Perhaps that shouldn't be surprising given all the hype from analysts and vendors. These days it’s tough to find an enterprise technology product that doesn't advertise itself as a key ingredient in digital transformation. And expert analysis is full of promises that sometimes seem too good to be true. ... Using hardware, software, algorithms, and the Internet, it's 10 times cheaper and faster to engage customers, create offerings, harness partners, and operate your business." That kind of promise is certainly enticing. But it's tough to find agreement on what exactly "digital transformation" means. For some organizations, it just means getting into ecommerce. For others, it involves doing away with paper-based processes and becoming more efficient. Still others are embracing cloud computing, DevOps, automation, the Internet of Things (IoT), and artificial intelligence (AI) to become more competitive. And many seem to be doing most of this and more.


The Top 5 Fintech Trends Everyone Should Be Watching In 2020

One of the latest “big things” in fintech is the growth of the mobile payments industry. Consumers want payments to be instant, invisible, and free (IIF). Mobile payment innovations might even do away with our traditional wallets as global consumers become less reliant on cash. Google, Apple, Tencent, and Alibaba already have their own payment platforms and continue to roll out new features such as biometric access control, including fingerprint and face recognition. One of the most popular payment methods in China, used by hundreds of millions of users every day, is WeChat Pay. Alibaba’s Alipay, a third-party online and mobile payment platform, is now the world’s largest mobile payment platform. Many mobile payment platforms are building programs and offers based on the user’s purchase history. While many financial institutions are continuing to adopt new technology to enhance operations and improve customer service, these five trends will provide exciting avenues for innovation. Financial institutions realize they must learn how to use fintech to their competitive advantage.


What Is Tech Debt and How to Explain It to Non-Technical People?

We can distinguish at least three types of technical debt. Even if developers don’t compromise on quality and try to build future-proof code, debt can arise involuntarily. This can be provoked by constant changes in the requirements or in the development of the system. Your design turned out to be flawed and you can’t add new features quickly and easily, but it wasn’t your fault or decision. In this case, we’re talking about accidental or unavoidable tech debt. The second type of technical debt is deliberate debt that appears as a result of a well-considered decision. Even if the team understands that there is a right way to write the code and a fast way to write the code, it may go with the second one. Often, it makes sense – as in the case of startups aiming to deliver their products to market very quickly to outpace their competitors. Finally, the third type of tech debt refers to situations when developers didn’t have enough skills or experience to follow specific best practices, which leads to really bad code. Bad code can also appear when developers didn’t take enough time and effort to understand the system they are working with, missed things or, conversely, made too many changes.


Experts' cloud predictions for 2020


Not many cloud predictions matter to the general populace, but this one about the power of AI affects everyone. In 2020, explainable AI will rise in prominence for cloud-based AI services -- particularly as enterprises face pushback around the ethical issues of AI. Explainable AI is a technology that provides justification for the decision that it reaches. Both Google and Microsoft have launched explainable AI initiatives, currently in early stages. Amazon is likely to introduce some explainable AI capabilities as part of its AI tools. Through the power of deep learning, data scientists can build models to predict things and make decisions. But this trend can result in black-box algorithms that are difficult for humans to make sense of. The biggest challenge enterprises face is the need to track bias in AI models and identify cases where models lose accuracy.
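
As a deliberately simple illustration of what a per-decision justification can look like, here is a minimal Python sketch. It is a generic, hand-rolled example (not the Google, Microsoft, or Amazon services mentioned above): a linear model's reasoning can be surfaced directly, because each feature's contribution to a single prediction is just its coefficient times its value.

```python
# Minimal sketch of explaining a single model decision (illustration only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one prediction: rank features by how strongly they pushed the score.
sample = X[0]
contributions = model.coef_[0] * sample
top = np.argsort(np.abs(contributions))[::-1][:5]

print("predicted class:", model.predict([sample])[0])
for i in top:
    print(f"  {data.feature_names[i]:>25s}: contribution {contributions[i]:+.3f}")
```

Deep learning models need heavier machinery (attribution methods, surrogate models) to produce the same kind of justification, which is exactly the gap the explainable AI services described above aim to fill.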


Wanted: More types of machine learning

The issue for me is that the ML categories I’ve mentioned are perhaps limiting. Consider dynamically combining all of the types, adjusting the approach, type, or algorithm during the processing of the training data, whether mass loads or transactions. At issue are use cases that don’t really fit these three categories. For example, we have some labeled data and some unlabeled data, and we’re looking for the ML engine to identify both the data itself and patterns in the data. Most of us don’t have perfect training data, and it would be nice if the ML engine itself could sort things out for us. With a few exceptions, we have to pick supervised or unsupervised learning and only solve a portion of the problem, and we may not have the training data needed to make it useful. Moreover, we lack the ability to provide reinforcement learning as the data is used within transactional applications, such as identifying a fraudulent transaction as it occurs. There are ways to create an “all of the above” approach, but it entails some pretty heavy-duty work for both the training data and the algorithms.
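
The "mix of labeled and unlabeled data" case described above already has a partial answer in semi-supervised learning. The sketch below is an illustrative assumption, not the author's proposed engine: scikit-learn's self-training wrapper treats samples labeled -1 as unlabeled and bootstraps labels for them from its own confident predictions.

```python
# Semi-supervised sketch: train with only a small fraction of real labels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Pretend we only have labels for 10% of the data; mark the rest as unlabeled (-1).
rng = np.random.RandomState(0)
y_partial = y.copy()
unlabeled = rng.rand(len(y)) > 0.10
y_partial[unlabeled] = -1

base = SVC(probability=True, gamma=0.001)
model = SelfTrainingClassifier(base).fit(X, y_partial)

# Evaluate on the points whose labels we hid during training.
accuracy = (model.predict(X[unlabeled]) == y[unlabeled]).mean()
print(f"accuracy on originally unlabeled samples: {accuracy:.3f}")
```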


5 open source innovation predictions for the 2020s

AI and machine learning have powered these innovations, and many of the AI advancements came about thanks to open source projects such as TensorFlow and PyTorch, which launched in 2015 and 2016, respectively. In the next decade, Ferris stressed the importance of not just making AI smarter and more accessible, but also more trustworthy. This will ensure that AI systems make decisions in a fair manner, aren't vulnerable to tampering, and can be explained, he said. Open source is the key to building this trust into AI. Projects like the Adversarial Robustness 360 Toolkit, AI Fairness 360 Open Source Toolkit, and AI Explainability 360 Open Source Toolkit were created to ensure that trust is built into these systems from the beginning, he said. Expect to see these projects and others from the Linux Foundation AI — such as the ONNX project, which provides a vendor-neutral interchange format for deep learning and machine learning models — drive significant innovation related to trusted AI in the future.


What’s interesting is how the HIPAA Security Rule also governs the physical aspect of ePHI and healthcare information systems. Not many information security standards go as deep as HIPAA when it comes to maintaining the physical security of information. The physical facility used to store ePHI needs to have sufficient security measures. Only authorized personnel are allowed access to the hardware and terminals connected to the healthcare information systems. Unauthorized access is considered a serious violation of the HIPAA standard. Logging is also a part of the physical safeguard. Access to terminals and servers must be logged in detail to prevent unauthorized access and allow for an easy audit of the secure facility. Logging on a physical level helps the entire system remain safe. There is also the need for secure devices and terminals, including secure tablets that are now used by medical personnel. It is up to the healthcare service providers to maintain a secure network across their facilities. To complete the equation, policies for hardware disposal and the termination of a healthcare information system must also be put in place.



Quote for the day:


-"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy


Daily Tech Digest - December 30, 2019

Doing the right thing: The rise of ethics in tech

"Culture means a lot of things," Schlesinger continued. "Culture in the broadest terms—in terms of tools, processes, norms, narratives—is bringing all of the things that ladder up to creating the kind of organization that is needed to then build the kind of products, features, and tools that society can benefit from." Today, we're at the point with ethics in technology that we were with automobiles in 1966, he noted, after Ralph Nader's 1965 book, "Unsafe at Any Speed," exposed and heightened awareness around the dangerous engineering practices involved in building cars at the time, resulting in new safety initiatives. "We've all awakened, and it's kind of unique that we're even having this conversation at a mainstream tech conference," Schlesinger said. "We're at the beginning stages of this evolution toward that kind of informed, just, rewarding culture in tech that holds itself accountable for the kinds of things we want to build. And ultimately, that is about showing our moral math." However, Paula Goldman, chief ethical and humane use officer at Salesforce, argued that we're not in 1966 but the early 1900s, with its waves of innovation and new norms.



Financial Services Could Never Do This Before

The cloud offers a tremendous new opportunity to scale your infrastructure on demand and offload some of the expense of data management, especially as it relates to new workloads or testbed environments. Yet the reality is, for most financial services institutions, much of the data resides on premises in data centers and will continue to for a long time – dictated by regional jurisdictions, data security concerns, or just a historical preference to control the data. Financial services organizations need a new approach. Flexibility to manage data across environments is critical. Today, organizations need an enterprise data cloud that offers the ability to ingest, process, store, analyze, and model any type of data (structured, unstructured, or semi-structured), regardless of where it lands — at the edge, on premises, in the data center, or in any public, private, or hybrid cloud.


GDPR: Moving Beyond Compliance


While more organisations move to develop a senior leadership approach to data privacy, in the year and a half since GDPR, a growing number of businesses are trying to put data privacy on the radar of their entire employee base. In these organisations, it is becoming everyone’s mission to have an understanding of provenance and the use of information, with everyone taking accountability for how the organisation collects, uses, and shares personal information. The idea of accountability is that “we say what we do and we do what we say” and, importantly, “we stand by doing what we do.” This culture of accountability is something that is also being extended to how organisations talk to their customers about data privacy. Increasingly, businesses are being open and inclusive, telling customers about what they are doing with personal information and how they are protecting it. In doing so, they recognise the need to close the gap in terms of the expectations, responsibilities, and actions relevant to privacy protections and information ethics. With big data breaches, such as recent ones that exposed the data of almost 400 million people, it is no wonder that the general public is becoming wary about parting with their personal information.


Cisco 2020: Challenges, prospects shape the new year


Cisco is attacking the cloud provider market by addressing its hunger for higher bandwidth and lower latency. At the same time, the vendor will offer its new technology to communication service providers. Their desire for speed and higher performance will grow over the next couple of years as they rearchitect their data centers to deliver 5G wireless services to businesses. For the 5G market, Cisco could combine Silicon One with low-latency network interface cards from Exablaze, which Cisco plans to acquire by the end of April 2020. The combination could produce exceptionally fast switches and routers to compete with other telco suppliers, including Ericsson, Juniper Networks, Nokia and Huawei. Startups are also targeting the market with innovative routing architectures. "Such a move could give Cisco an edge," said Tom Nolle, president of networking consultancy CIMI Corp., in a recent blog.


Don’t Let Impostor Syndrome Derail Your Next Interview


Even when you’re well prepared for an interview and know that you’re perfectly qualified for the job, it can still be a nerve-racking experience to walk into a room full of strangers and prepare to be judged. To manage your jitters, start by controlling the controllable elements of your interview experience. If you’re worried about arriving punctually, for example, try taking multiple routes to your destination before the day of the interview to see which one gets you there fastest, with the least amount of traffic. Managing nervousness around the interview itself is another area where you can be proactive. In Cliff’s case, he decided to build in extra time before the interview for a 10-minute walk around the block. During this scheduled pre-meeting stroll, Cliff planned to focus on deep breathing to help ratchet down his stress response. I recommended that while walking, he take a minute or two to inhale for a count of four seconds, hold his breath for two seconds, and then exhale for a count of four seconds. He found this process deeply calming, and it allowed him to enter the interview setting feeling more confident and settled.


5 Lessons George Lucas Taught Us About Innovation

Experimentation is an important part of any innovative team. In 1979, Lucas created The Graphics Group as part of Lucasfilm’s computer division and hired Edwin Catmull to lead it. The goal of this group was to invent new digital production tools for use in live action films. They were successful in this goal and even created software used in medical and satellite imagery. However, Catmull’s team really longed to create full-length computer-generated imagery (CGI) animated films. As they struggled to build a profitable business, neither side was achieving its goals. Lucas put it this way: “I didn’t want to run a company that sold software, and John [Lasseter] and Ed wanted to make animated films.” Eventually it became clear that… Someone had to be the first to push the boundaries of blending CGI and live action beyond short special-effects shots. No longer was storytelling constrained by the limitation of a human actor. This risk gave other filmmakers a platform to build from, slowly crafting new characters to where we are today, where CGI characters are nearly indistinguishable from human ones.


The Evolution Of Data Protection

DLP is only as good as the classification rigidity enforced by the organization. Classification is always too rigid and can't keep up with fluid data movement. For DLP to prevent data egress, data must be classified correctly. Classification is complicated and fragile. What is sensitive today is not sensitive tomorrow and vice versa. Classification turns into an endless battle of users trying to manage the classification of data. Ultimately, classification and DLP deteriorate over time. DLP adds extremely high operational overhead, as it requires users to be classification superstars, and even then, mistakes will happen. Desjardins Group, a Canadian financial institution, recently made news for a malicious insider who obtained information on 2.7 million customers and over 170,000 businesses. The exact details of the breach haven't been made public yet, but DLP solutions are standard in all financial institutions. PGP's encryption is a privacy tool. Users can encrypt their data so others can't access it, but PGP fails once users try to share data with other users.


California’s privacy law means it’s time to add security to IoT

If you think about the evolution of the marketplace, we’re at a state now where the technology has gotten us to a certain point. We have connectivity. Wi-Fi has gotten to a point where it’s ubiquitous. Access to the internet is pretty pervasive around the world. That’s spun this billions-of-units vision, saying that everything is going to be connected. That’s interesting, and from a technology perspective, we’re seeing that in our houses. We see the ubiquity of these connected devices in our homes. But what quickly happens is you get what I call a normative period where societal issues come to the fore, the biggest one being privacy. You go into this normative period now where everyone says we need privacy, and then you have to have some sort of governance over the devices to create an environment where you can deliver that capability in a cost-effective way. What I’m saying specifically, as it relates to privacy today, is that there are no standards. There is no threshold. Therefore, these devices can be anywhere from having zero security capability to everything in between, across the spectrum.


IoT vendor Wyze confirms server leak

Song confirmed that the leaky server exposed details such as the email addresses customers used to create Wyze accounts, nicknames users assigned to their Wyze security cameras, WiFi network SSID identifiers, and, for 24,000 users, Alexa tokens to connect Wyze devices to Alexa devices. The Wyze exec denied that Wyze API tokens were exposed via the server. In its blog post, Twelve Security claimed they found API tokens that they say would have allowed hackers to access Wyze accounts from any iOS or Android device. Second, Song also denied Twelve Security's claims they were sending user data back to an Alibaba Cloud server in China. Third, Song also clarified Twelve Security claims that Wyze was collecting health information. The Wyze exec said they only collected health data from 140 users who were beta-testing a new smart scale product. Song didn't deny Wyze collected height, weight, and gender information. He did, however, deny others. "We have never collected bone density and daily protein intake," the Wyze exec said. "We wish our scale was that cool."


How one bizarre attack laid the foundations for the malware taking over the world


The first instance of what we now know as ransomware was called the AIDS Trojan because of who it was targeting – delegates who'd attended the World Health Organization AIDS conference in Stockholm in 1989. Attendees were sent floppy discs containing malicious code that installed itself onto MS-DOS systems and counted the number of times the machine was booted. When the machine was booted for the 90th time, the trojan hid all the directories and encrypted the names of all the files on the drive, making it unusable. Victims saw instead a note claiming to be from 'PC Cyborg Corporation' which said their software lease had expired and that they needed to send $189 by post to an address in Panama in order to regain access to their system. It was a ransom demand for payment in order for the victim to regain access to their computer: that made this the first ransomware. Fortunately, the encryption used by the trojan was weak, so security researchers were able to release a free decryption tool – and so started a battle that continues to this day, with cyber criminals developing ransomware and researchers attempting to reverse engineer it.



Quote for the day:


"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer


Daily Tech Digest - December 29, 2019

Are we running out of time to fix aviation cybersecurity?

Flying remains one of the safest ways to travel, and that's due in large part to continuous efforts to improve air safety. Cultural norms in aviation have rewarded and incentivized a whistleblowing culture, where the lowliest mechanic can throw a red flag and stop a jet from taking off if he notices a potential safety issue. Contrast that with the often-fraught issue of reporting security vulnerabilities, where shame and finger-pointing and buck passing are the norm. The report highlights the problem, writing, "Across much of the cybersecurity landscape, there arguably remains a stigma about discussing cybersecurity vulnerabilities and challenges that go beyond managing sensitive vulnerabilities." A wormable exploit or a backdoored software update — like the backdoored MeDoc software update that started the Petya worm — could cause safety issues at scale. It’s unclear that the aviation industry’s traditional safety thinking is sufficient to meet this challenge. For instance, the report calls out the need for greater information sharing on aviation cybersecurity threats, acknowledging the risk of a Maersk-like scenario and observing rather drily that "other sectors have seen the scale and costs from a single vulnerability and 'wormable' exploit."



AI vs. Machine Learning: Which is Better?


Artificial intelligence came from the words “artificial” and “intelligence.” Artificial means it is created by a non-natural thing, or a human, and intelligence means the ability to think and understand things. Some people think artificial intelligence is a system, but in fact it exists within a system. AI has rules that were pre-determined by an algorithm set by a person. AI most often appears on smartphones, desktop computers, and smartwatches. ... Machine learning is capable of learning by itself. It is a computer system that can acquire knowledge and solve a problem based on its experience. ML acts on the data provided by humans and predicts accurate solutions based on the information gathered by the machine/computer. Machine learning uses a different algorithm from artificial intelligence. A machine learning algorithm is capable of deciding on its own. Artificial intelligence is capable of answering a pre-determined question with a pre-determined solution.


How AI, Analytics & Blockchain Empower Efficient & Intelligent Supply Chain?


Making good on its promise to disrupt every industry for the better, AI is transforming supply chain management as well. The technology has a number of applications in the supply chain, which include extraction of information, analysis of data, planning for supply and demand, and better management of autonomous vehicles and warehouses. AI-enabled NLP scans through supply chain documents like contracts, purchase orders, chat logs with customers or suppliers, and other significant sources to identify commonalities, which are used as feedback to optimize SCM as part of continual improvement. ML helps people manage the flow of goods throughout the supply chain while ensuring that raw materials and products are in the right place at the right time. The technology can also source and process data from different areas and forecast future demand based on external factors. And most importantly, AI helps analyze warehouse processes and optimize the sending, receiving, storing, picking, and management of individual products.


Netgear Nighthawk M2 Mobile Router, hands on

Very much designed as a 'travel router', the square lozenge of the M2 measures 105mm by 105mm by 20.5mm and weighs 240g. It's easy to slip into a briefcase or bag when you're travelling, and won't weigh you down. Like its M1 predecessor, the M2 relies on 4GX LTE mobile broadband, as Netgear argues that 5G networks aren't sufficiently widespread to justify the extra cost of adding 5G support. However, Category 20 4GX LTE support means that the M2 doubles its maximum download speed from 1Gbps to 2Gbps, although the upload speed remains the same at 150Mbps. It then uses dual-band 802.11ac to create its own wi-fi network, which can support connections from up to 20 separate devices. The M2 also gains a larger 2.4-inch touch-sensitive display that allows you to quickly configure the router, and to monitor signal strength, data usage and other settings. The Netgear Mobile app provides similar controls for Android and iOS devices, and there's a browser interface available for computers as well.


What is Jenkins? The CI server explained

Today Jenkins is the leading open-source automation server with some 1,400 plugins to support the automation of all kinds of development tasks. The problem Kawaguchi was originally trying to solve, continuous integration and continuous delivery of Java code (i.e. building projects, running tests, doing static code analysis, and deploying) is only one of many processes that people automate with Jenkins. Those 1,400 plugins span five areas: platforms, UI, administration, source code management, and, most frequently, build management. Jenkins is available as a Java 8 WAR archive and installer packages for the major operating systems, as a Homebrew package, as a Docker image, and as source code. The source code is mostly Java, with a few Groovy, Ruby, and Antlr files. You can run the Jenkins WAR standalone or as a servlet in a Java application server such as Tomcat. In either case, it produces a web user interface and accepts calls to its REST API. When you run Jenkins for the first time, it creates an administrative user with a long random password, which you can paste into its initial webpage to unlock the installation.
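
The REST API mentioned above is how most external tooling drives Jenkins. The sketch below is a hedged illustration in Python using the requests library; the host URL, credentials, and job name are hypothetical placeholders, and real instances typically require an API token plus (depending on configuration) a CSRF crumb for POST requests.

```python
# Minimal sketch of calling the Jenkins REST API (placeholders throughout).
import requests

JENKINS = "http://localhost:8080"          # hypothetical Jenkins URL
AUTH = ("admin", "api-token-goes-here")    # user + API token (placeholder)

# List jobs via the JSON API.
jobs = requests.get(f"{JENKINS}/api/json", auth=AUTH).json()
for job in jobs.get("jobs", []):
    print(job["name"], "->", job.get("color"))

# Fetch a CSRF crumb, then trigger a build of a hypothetical job.
crumb = requests.get(f"{JENKINS}/crumbIssuer/api/json", auth=AUTH).json()
headers = {crumb["crumbRequestField"]: crumb["crumb"]}
resp = requests.post(f"{JENKINS}/job/example-pipeline/build", auth=AUTH, headers=headers)
print("build trigger status:", resp.status_code)   # 201 means the build was queued
```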


Process Mining vs. Business Process Discovery

Harvard Business Review – a publication that’s unfortunately becoming increasingly political by the day – published an article about process mining earlier this year, written by two individuals who have been involved with the field for four decades now. According to the experts, process mining solves a few fundamental challenges associated with business process management. These are: Companies tend to spend too little time or too much time analyzing “as is” business processes; and There is a lack of connections between business processes and an organization’s enterprise information systems. Starting with the first bullet point, we’d argue that for most companies, if you can’t figure out the optimal time to spend analyzing an existing business process, hire better BPAs. The second bullet point describes the inability to capture the “interoperability” of processes and information systems. Fair enough. Organizations are incredibly complex entities, and one department might interact with hundreds of internal systems. Enter process mining and a German company called Celonis.


Azure Cosmos DB — A to Z


Behind the scenes, Cosmos DB uses a distributed data algorithm to increase the RUs/performance of the database; every container is divided into logical partitions based on the partition key. A hash algorithm is used to divide and distribute the data across multiple partitions. Further, these logical partitions are mapped to multiple physical partitions (hosted on multiple servers). Placement of logical partitions over physical partitions is handled by Cosmos DB to efficiently satisfy the scalability and performance needs of the container. As the RU needs increase, it increases the number of physical partitions (more servers). As a best practice, you must choose a partition key that has a wide range of values and access patterns that are evenly spread across logical partitions. For example, if you are collecting data from multiple schools but 75% of your data comes from one school only, then it’s not a good idea to use the school as the partition key.
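
A minimal sketch of that partition-key choice, using the azure-cosmos Python SDK (v4); the endpoint, key, and names here are hypothetical. Instead of partitioning on a skewed value such as the school, the container is keyed on a high-cardinality value (studentId) so requests spread evenly across logical partitions.

```python
# Sketch: creating a container with a well-chosen partition key (placeholder account/key).
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://my-account.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("school_data")

container = database.create_container_if_not_exists(
    id="attendance",
    partition_key=PartitionKey(path="/studentId"),  # wide range of values, even access
    offer_throughput=400,                            # provisioned RUs for the container
)

# Reads that supply the partition key stay inside a single logical partition.
container.upsert_item({"id": "rec-1", "studentId": "s-1001", "school": "Central High"})
item = container.read_item(item="rec-1", partition_key="s-1001")
print(item["school"])
```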


Digital process automation vs. robotic process automation

Digital process automation can also be easily confused with another similar term: robotic process automation. Robotic process automation (RPA) uses more intelligent automation technology -- such as artificial intelligence (AI) and machine learning (ML) -- to handle high-volume, repeatable tasks. RPA can be used to automate queries and calculations as well as maintain records and transactions. This is typically done using bots such as probots, knowbots or chatbots. What distinguishes RPA from other forms of IT automation is the ability of the RPA software to be aware and adapt to changing circumstances, exceptions and new situations. Whereas DPA comes from BPM, and BPA comes from infrastructure management, RPA is not considered a part of the infrastructure. Instead, RPA sits on top of an organization's infrastructure; this allows an organization to quickly implement a digital process technology.


Data governance & retention in your Microsoft 365 tenant

Data governance has long relied on transferring data to a third party that hosts an archive service. Emails, documents, chat logs, and third-party data (Bloomberg, Facebook, LinkedIn, etc.) must be saved in a way that ensures it can’t be changed and won’t be lost. Data governance is part of IT at the enterprise level. It serves regulatory compliance, can facilitate eDiscovery, and is part of a business strategy to protect the integrity of the data estate. However, there are downsides. In addition to acquisition costs, the archive is one more system that needs ongoing maintenance. When data is moved to another system, the risk footprint is increased, and data can be compromised in transit. An at-rest archive can become another target of attack. When you take the data to the archive, you miss the opportunity to reason over it with machine learning to extract additional business value and insights to improve the governance program. The game changer is to have reliable, auditable retention inside the Microsoft 365 tenant.


Top 6 Software Testing Trends to Look Out in 2020

Despite the promising prospects of AI/ML application in software testing, experts still regard AI/ML in testing as being in its infancy. Therefore, numerous challenges remain for the application of AI/ML in testing to reach maturity. The rising demands for AI in testing and QA teams signal that it’s time for Agile teams to acquire AI-related skill sets, including data science, statistics, and mathematics. These skill sets will be the ultimate complement to the core domain skills in test automation and software development engineering in test (SDET). Additionally, successful testers need to adopt a combination of pure AI skills and non-traditional skills. Indeed, last year a variety of new roles were introduced, such as AI QA analyst or test data scientist. As for automation tool developers, they should focus on building tools that are practical. Companies are utilizing PoCs and reassessing options to make the best use of AI while considering budgets.



Quote for the day:


"Everyone wants to be appreciated. So if you appreciate someone, don't keep it a secret." -- Mary Kay Ash


Daily Tech Digest - December 28, 2019

Taiwanese Police Arrest Miner Accused of Stealing Millions in Power
Proof of Work was the original consensus mechanism used by Bitcoin and latterly implemented on the likes of Ethereum, Litecoin, and Dogecoin. PoW involves performing thousands of calculations per second to find the solution to a mathematical problem that is hard to solve but easy to verify. The Proof of Work system incentivizes miners by rewarding them with coins for each new block found. Although it remains an extremely fair and secure consensus mechanism, PoW has been criticized over the years. Much has been made, for example, of its high energy and resource requirements: the computational power needed for miners to solve complex mathematical puzzles ahead of their peers is huge. Critics lose sight of the fact that this is a feature and not a bug: the difficulty of cheating Proof of Work is what makes it so robust, and why the Bitcoin network is so valuable. Even the most well funded adversary would struggle to obtain the hashpower necessary to control the network and double spend coins.
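
The "hard to solve, easy to verify" property is easy to see in a toy sketch. The Python below is an illustration only; real Bitcoin mining uses double SHA-256 over block headers and vastly higher difficulty. Here the "work" is searching for a nonce whose hash begins with a given number of zero hex digits, while verification is a single hash.

```python
# Toy proof-of-work: expensive search, cheap verification.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search nonces until the hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification is a single hash, which is why checking work is cheap."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block #1: alice pays bob 1 coin")
print(nonce, digest)
print("valid:", verify("block #1: alice pays bob 1 coin", nonce))
```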


DevOps in the enterprise requires focus on security, visibility

In this episode of Test & Release, Pariseau, who writes for SearchSoftwareQuality and SearchITOperations, discusses technology topics that will matter in 2020. She also shares experiences from containers, cloud and DevOps conferences such as KubeCon and DevSecCon, where diverse leaders related the many challenges associated with DevOps and Agile transformation. Success for DevOps in the enterprise starts with small wins and a consistent march toward improvement. "It's clear that enterprises have had to handle this digital transformation in phases," Pariseau said. "You have to eat the elephant one bite at a time." Take security in the SDLC. DevOps purists, she says, intended for business and security concerns to get rolled into the natural cadence of a lifecycle. However, as many teams struggle with pipeline complexities bringing DevOps to mainstream enterprise IT, those concerns took a back seat. Now, enterprises are putting security back into focus, as high-profile breaches carry potentially disastrous repercussions.


A decade of fintech megatrends
Forecasts that this sector will cross the $25 billion mark by 2025 seem grossly inadequate to me. Libra has awoken central banks, policy makers and regulators with the likelihood that a dominant global industry led stablecoin may emerge. The FSB, BIS, and IOSCO are all focused on analysing the market impact of stablecoins and central banks are reviewing their plans for digital fiat currencies. Libra may have fumbled in the early days with its own narrative, but its impact has been sensational. Following the ICO crash and pullback of the bitcoin price in 2018 the sector has regrouped with an enterprise focus: new digital assets and derivatives, and a focus on exchange, custody and settlement infrastructure. Market leaders include R3 with its Corda platform and SIX, the Swiss stock exchange, which will partner to platform digital assets; a JP Morgan Coin for client payments; and Fidelity Digital Assets platform for institutional clients. After Xi Jinping's comments, expect the Chinese government to push the development of blockchain technology ahead of the application of cryptocurrencies, which are banned in China.


Remme's technology reduces reliance on passwords and the scope for human failure, presenting a high-end security system that is simple to use without jeopardizing security. Remme solves the issue of central servers that can be hacked, and with the help of blockchain restricts attacks such as phishing, server and password breaches, and password reuse attacks. Users can utilize the free version of the system for up to 10,000 logins per month. Up to 100,000 logins per month can be had for $199, which is inexpensive compared to its competitors. Remme is headquartered in Ukraine and has been in existence since 2015, and its name is becoming known in the industry. It serves a wide range of businesses, most of them companies that have to safeguard their clients’ sensitive data, but anyone can use it, including small organizations and individuals. Remme has two essential strengths. Firstly, it uses new technology that is hack-proof, so it guarantees client data security and avoids any possible damages or losses.


Tesla describes its solution in the patent application: “A data pipeline that extracts and provides sensor data as separate components to a deep learning network for autonomous driving is disclosed. In some embodiments, autonomous driving is implemented using a deep learning network and input data received from sensors. For example, sensors affixed to a vehicle provide real-time sensor data, such as vision, radar, and ultrasonic data, of the vehicle’s surrounding environment to a neural network for determining vehicle control responses. In some embodiments, the network is implemented using multiple layers. The sensor data is extracted into two or more different data components based on the signal information of the data. For example, feature and/or edge data may be extracted separate from global data such as global illumination data into different data components. The different data components retain the targeted relevant data, for example, data that will eventually be used to identify edges and other features by a deep learning network. ..."


The Year of Magecart: How the E-Commerce Raiders Reigned in 2019

While the retail giant notified customers on Nov. 15, the company has yet to release details of the attack. How many customers were impacted by the breach, for example, remains unknown. Researchers, however, believe the intruders belong to a loose grouping of cybercriminal gangs known as Magecart groups, named for their habit of skimming financial details from shopping carts and, often, the Magento e-commerce platform. This particular group had upped its game: The attackers had tightly integrated their information-gathering code into two parts of the website and had knowledge of how Macy's e-commerce site functioned, security firm RiskIQ said in a Dec. 19 analysis. "The nature of this attack, including the makeup of the skimmer and the skills of the operatives, was truly unique," said Yonathan Klijnsma, head researcher with RiskIQ, in his analysis. "I've never seen a skimmer so meticulously constructed and able to play to the functionality of the target website." The Macy's breach is the latest success for the broad class of Magecart attackers.


In its traditional configuration using value functions or policy search, the RL algorithm essentially conducts a completely random search of the state space to find an optimum solution. The fact that it is a random search accounts for the extremely large compute requirement for training. The more sequential steps in the learning process, the greater the search and compute requirement. The new upside-down approach introduces gradient descent from supervised learning, which promises to make training orders of magnitude more efficient. Using rewards as inputs, UDRL observes commands as a combination of desired rewards and time horizons, for example “get so much reward within so much time” and then “get even more reward within even less time”. As in traditional RL, UDRL learns by simply interacting with its state space, except that learning is now driven by gradient descent on these self-generated commands. In short, this means training occurs against trials that were previously considered successful (gradient descent) as opposed to completely random exploration.
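
A heavily simplified sketch of that idea follows. It is an illustrative assumption, not the published UDRL algorithm in full: a policy network is trained with ordinary supervised gradient descent to map (state, desired reward, desired horizon) commands to the actions that actually achieved them in past episodes, with the commands labeled in hindsight from recorded rewards. The toy environment here is random data standing in for real experience.

```python
# Toy command-conditioned policy trained with supervised loss (PyTorch).
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2

policy = nn.Sequential(
    nn.Linear(STATE_DIM + 2, 32), nn.ReLU(),   # +2 for the command (return, horizon)
    nn.Linear(32, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def random_episode(length=10):
    """Stand-in for real experience: random states, actions, and per-step rewards."""
    states = torch.randn(length, STATE_DIM)
    actions = torch.randint(0, N_ACTIONS, (length,))
    rewards = torch.rand(length)
    return states, actions, rewards

replay = [random_episode() for _ in range(200)]

for step in range(500):
    states, actions, rewards = replay[step % len(replay)]
    T = len(rewards)
    # For every timestep, the command is "achieve this much reward in this much time",
    # computed in hindsight from what actually happened later in the episode.
    to_go = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0]).unsqueeze(1)
    horizon = torch.arange(T, 0, -1, dtype=torch.float32).unsqueeze(1)
    inputs = torch.cat([states, to_go, horizon], dim=1)

    logits = policy(inputs)
    loss = nn.functional.cross_entropy(logits, actions)  # plain supervised learning
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At execution time you would feed the current state plus a chosen command,
# e.g. "get reward 8 within 10 steps", and take the argmax action.
```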


The patent, granted earlier this month after being filed back in March 2015, outlines a system that allows users to make bitcoin payments using an email address linked to a cryptocurrency wallet. "Bitcoin can be sent to an email address," the patent filing read, detailing the advantages of the technology. "No miner's fee is paid by a host computer system. Instant exchange allows for merchants and customers to lock in a local currency price. A tip button rewards content creators for their efforts. A bitcoin exchange allows for users to set prices that they are willing to sell or buy bitcoin and execute such trades." However, the system takes 48 hours for the transaction to clear once the receiver has confirmed the payment, and there doesn't appear to be support for other major cryptocurrencies. The technology could mean a big step for mainstream adoption of bitcoin—something that's been a long-term goal of Coinbase's CEO Brian Armstrong.



An essential API test verifies that an API is capable of connection, and that it is sending and receiving data. At some level, the QA team should include security testing. API messages must verify security at both ends of a data exchange. In addition to connectivity and security, verify database validity. If the APIs allow invalid data during an exchange, the database and applications are susceptible to failure from an unexpected source. Data validity is critical for API, database and application communication. To vet these areas, make sure to test error conditions as well. The API developer should share the error codes that will generate when the system rejects an incoming message for security or data issues, when messages are in the wrong format and when the API endpoint is down or non-functional. The QA engineer should verify that the API returns the data the IT organization expects across systems. Many applications have integrated components, such as a web portal and a mobile app.
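
A small, hedged sketch of the checks described above, written with Python's requests library in a pytest style; the endpoint and payload are hypothetical placeholders. The point is the shape of the tests: connectivity, data validity, expected error codes, and basic security enforcement.

```python
# Illustrative API tests (placeholder endpoint and fields).
import requests

BASE_URL = "https://api.example.com/v1"   # hypothetical endpoint
TIMEOUT = 5

def test_api_is_reachable():
    # Connectivity: the API answers and returns data at all.
    resp = requests.get(f"{BASE_URL}/health", timeout=TIMEOUT)
    assert resp.status_code == 200

def test_valid_payload_round_trip():
    # Data validity: what we send comes back in the expected shape.
    payload = {"customer_id": 42, "email": "test@example.com"}
    resp = requests.post(f"{BASE_URL}/customers", json=payload, timeout=TIMEOUT)
    assert resp.status_code == 201
    assert resp.json()["customer_id"] == payload["customer_id"]

def test_invalid_payload_is_rejected():
    # Error conditions: malformed input must be refused, not written to the database.
    resp = requests.post(f"{BASE_URL}/customers", json={"email": 123}, timeout=TIMEOUT)
    assert resp.status_code in (400, 422)

def test_auth_is_enforced():
    # Security: requests without credentials should not succeed.
    resp = requests.get(f"{BASE_URL}/customers/42", timeout=TIMEOUT)
    assert resp.status_code in (401, 403)
```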


A CISO Offers Insights on Managing Vendor Security Risks

"You should absolutely be applying some third-party risk assessment methodology," Decker stresses in an interview with Information Security Media Group. "Look at these third-party organizations and understand what type of security practices they have in place. You need to understand what kind of data you're putting into those systems and how important these third-party suppliers are to your operations." For inherently high-risk vendors, he says, organizations should "have a corresponding level of scrutiny and control around how those vendors are actually applying security around your systems, or as an entry point into your environment." Organizations need to ensure that the terms and conditions that they include in their contracts with vendors "not only have some technical components about the data that's going into their environment, [but also] the components where they're connecting to, a back channel," he says. They not only need to specify what kinds of controls they want vendors to have in place, but also "make sure there are the appropriate liabilities that are truly accounted for in that contract," he adds.



Quote for the day:


"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg


Daily Tech Digest - December 27, 2019

Exposed databases are as bad as data breaches, and they're not going anywhere


If your data is exposed in an unsecured database, experts say you have to treat the situation the same way you would if the data had been stolen. "You need to engage proactively in minimizing your risk," said Eva Velasquez, president of the Identity Theft Resource Center. Medical service provider Tu Ora Compass Health said the same thing to nearly 1 million patients when it revealed that its poorly configured website had exposed patient health insurance data. Patients should "assume the worst" and act as though hackers had accessed the data, the company said. What's the worst that can happen? Stolen information makes it easier for identity thieves to pretend to be you. When combined with what you share on social media, for example, your medical record number could allow someone else to use your health insurance. The Identity Theft Resource Center hosts a service called Breach Clarity that helps you decide what steps to take after your data is compromised. The advice depends on what kind of information was involved. If your log-in credentials are exposed, you'll want to reset your passwords. If it's your Social Security number, you'll want to watch your credit report for signs that someone's opening up new lines of credit in your name.



Introduction to ELENA Programming Language

Methods in ELENA are similar to methods in C# and C++, where they are called "member functions". Methods may take arguments and always return a result (if no result is provided, the "self" reference is returned). The method body is a sequence of executable statements. Methods are invoked from expressions, just as in other languages. There is an important distinction between "methods" and "messages". A method is a body of code, while a message is something that is sent. A method is similar to a function; in this analogy, sending a message is similar to calling a function. An expression which invokes a method is called a "message sending expression". ELENA terminology makes a clear distinction between "message" and "method". A message-sending expression will send a message to the object. How the object responds to the message depends on the class of the object. Objects of different classes will respond to the same message differently, since they will invoke different methods. Generic methods may accept any message with the specified signature.
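
ELENA syntax is not shown here; as a loose analogy only, the Python sketch below illustrates the method-versus-message distinction just described. The same "message" (a name plus arguments) is sent to objects of different classes, and each class responds with its own method.

```python
# Analogy (not ELENA): sending a message means looking up whatever method the
# receiver's class binds to that name.
class Circle:
    def area(self, radius):
        return 3.14159 * radius ** 2

class Square:
    def area(self, side):
        return side * side

def send(receiver, message, *args):
    """Send a message: dispatch to the receiver's own method for that name."""
    return getattr(receiver, message)(*args)

for shape in (Circle(), Square()):
    # Same message, different methods, different behavior.
    print(type(shape).__name__, send(shape, "area", 2))
```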


Amazon now allows developers to combine tools such as Amazon QuickSight, Aurora, and Athena with SQL queries and thus access machine learning models more easily. In other words, developers can now access a wider variety of underlying data without any additional coding, which makes the development process faster and easier. Amazon’s Aurora is a MySQL-compatible database that automatically pulls the data into the application to run any machine learning model the developer assigns it. Then, developers can use the company’s serverless system known as Athena to obtain additional sets of data more easily. Finally, the last piece of the puzzle is QuickSight, Amazon’s tool used for creating visualizations based on available data. The combination of these three tools will provide a far more efficient approach to the development of machine learning models. During the announcement, Wood also mentioned a lead-scoring model that developers can use to pick the most likely sales targets to convert.


Ranking the obstacles involved in firewall management, 67% of those surveyed pointed to the initial deployment and tuning measures, 67% cited the process of implementing changes, and 61% referred to the procedure for verifying changes. Cost is another hurdle with firewalls. Depending on the size of the organization and the type of firewall, a single unit can cost anywhere from hundreds to thousands to tens of thousands of dollars and up. Some 68% of the respondents said they have a hard time receiving the necessary initial budget to purchase firewalls, while 66% bump into difficulty getting the funding to operate and maintain them. Tweaking the rules on a firewall is yet another taxing task. Changes to code, applications, and processes can occur fast and furiously, requiring frequent updates to firewall rules. But a single firewall update can take one to two weeks, according to the survey. And such changes can sometimes be trial and error. More than two-thirds of the respondents cited the difficulty of testing changes to firewall rules before deploying them. The lack of a proper testing platform can lead to misconfigured rules that break applications.


Hugh Owen, Executive Vice President, Worldwide Education at MicroStrategy asserts "Enterprise organizations will need to focus their attention not just on recruiting efforts for top analytics talent, but also on education, reskilling, and upskilling for current employees as the need for data-driven decision making increases—and the shortage of talent grows." Skills shortages show up everywhere, especially in AI. John LaRocca, Managing Director for Europe/NA Operations at Fractal Analytics, comments that "The demand for AI solutions will continue to outpace the availability of AI talent, and businesses will adapt by enabling more applications to be developed by non-AI professionals, resulting in the socialization of the process."  In that same vein, noted industry expert Marcus Borba, at Borba Consulting, remarks, in a report from MicroStrategy, that "the demand for development in machine learning has increased exponentially. This rapid growth of machine learning solutions has created a demand for ready-to-use machine learning models that can be used easily and without expert knowledge."


Google Publishes Its BeyondProd Cloud-native Security Model

In zero-trust networking, protection of the network at its outer perimeter remains essential. However, going from there to full zero-trust networking requires a number of additional provisions. This is by no means easy, given the lack of standard ways to do it, adds Brunton-Spall: You can understand [it] from people who've done this, custom-built it. If you want to custom build your own, you should follow the same things they do. Go to conferences, learn from people who do it. Filling this gap, Google's white paper sets out a number of fundamental principles which complement the basic idea of no trust between services. Those include running code of known provenance on trusted machines, creating "choke points" to enforce security policies across services, defining a standard way to roll out changes, and isolating workloads. Most importantly, these controls mean that containers and the microservices running inside them can be deployed, communicate with one another, and run next to each other securely, without burdening individual microservice developers with the security and implementation details of the underlying infrastructure.


What if we’re leading change all wrong? The book “Make it Stick: The Science of Successful Learning,” by Peter C. Brown, Henry L. Roediger III and Mark A. McDaniel highlights stories and techniques based on a decade of collaboration among eleven cognitive psychologists. The authors claim that we’re doing it all wrong. For example, we attempt to solve the problem before learning the techniques to do so successfully. Using the right techniques is one of the concepts that the authors suggest makes learning stickier. Rolling out data-management initiatives is complex and usually involves a cross-functional maze of communications, processes, technologies, and players. Our usual approach is to push information onto our business partners. Why? Well, of course, we know best. What if we changed that approach? This would be uncomfortable, but we are talking about getting other people to change, so maybe we should start with ourselves. Business relationship managers stimulate, surface, and shape demand. They’re evangelists for IT and building organizational convergence to deliver greater value. There’s one primary method to accomplish this: collaboration.


Setting Management Expectations in Machine Learning

Business leaders often forget that machine learning algorithms are not a panacea that can be thrust into a given use case and expected to magically deliver value on their own. Algorithms rely on large, accurate, datasets to train and generate predictions. Data science is just the end result of a long process of data collection, cleansing, and tagging that requires significant investment. That’s why it’s important to have a robust Data Governance strategy in place at your business. Unfortunately, management often forgets this. Having failed to make the necessary investments in Data Governance, they nonetheless expect their data scientists to “figure it out.” Even where management has made the necessary investments in Data Governance and you have access to a large, healthy, internal dataset, there are certain functions you will still have difficulty performing. These most prominently include anything that requires you to leverage customer data. The frequency of widespread breaches and scandals involving the misuse of data, along with the accompanying rise in government regulation, has made it more difficult than ever to leverage customer data within businesses’ ML systems.



"As more states follow California's lead and push forward with new privacy laws, we'll likely see increased pressure on the federal government to take a more proactive role in the privacy sphere," said Mary Race, a privacy attorney in California. The Senate Commerce Committee held a hearing in December to discuss two potential frameworks, both of which seek to set a federal standard and designate regulators to enforce the law. Lawmakers expressed bipartisan support for privacy laws though no legislation has moved forward. Still, several key aspects of a prospective law were up for debate at the hearing. The Republican framework, submitted by Sen. Roger Wicker of Mississippi, would preempt state data privacy laws, and would limit enforcement to the FTC. Sen. Maria Cantwell of Washington, who submitted the Democratic bill, has said she's considering letting consumers directly sue companies, and would not supersede state laws. While federal law supersedes state law in general, many federal laws leave room for states to enact tougher requirements on top of the baseline set by US legislators.



How Data Subject Requests are at the heart of protecting privacy

Not only has data proliferated, but it’s also mutated into derivative forms. Customer data is often collected across multiple channels without being linked to a master identifier, and the definition of what is considered PII is continuing to change. The other reason the DSR search process is difficult is that many organizations still rely on questionnaires and spreadsheets for data discovery. These manual processes are inefficient at best, and incredibly inaccurate at worst. Consider that a single bank transaction might be replicated across 100 systems. Successfully fulfilling a DSR for that customer could require multiple people to manually search all those systems, and the accuracy and completeness may be questionable. Not only would the individual’s privacy be compromised, but the bank would also have to defend the results with regulators. In an age of big data and automation, relying on manual processes to fulfill privacy laws seems unbelievably arcane, if not impossible given the sheer volume of data companies have. Fortunately, many organizations are beginning to realize the complexity and importance of the DSR process and are looking to automate it.



Quote for the day:


"People not only notice how you treat them, they also notice how you treat others." -- Gary L. Graybill


Daily Tech Digest - December 26, 2019

Decade in review: Reflections on the last 10 years in the tech industry

2020 past circle #3
Technology has irreversibly gone from the sole province of the back office to a key element of most organizations' products and services, and oftentimes a strategic and competitive differentiator. This transition is furthering a trend from earlier in the decade, whereby technology in some organizations was splitting between core "keep the lights on" services in the back office, and technologies that powered products and transformational initiatives. In extreme cases, the CIO has become a utility player while other functions like marketing or product development get the preponderance of a company's technology spend. On the other extreme are CIOs who have become brokers of technology services that power marketing, product development, and digital transformation while pushing management of back-office systems to staff or an external vendor. As back-office systems increasingly become commodities that can be purchased from a cloud vendor, it appears that the operationally oriented CIO will become increasingly less important and disappear from the executive ranks at many companies.


Corporate IT training gets high profile in 2020


Multiple training methods may prove necessary, but Becirovic advised establishing a common platform for delivery -- instead of creating a series of one-off training vehicles. The company's Accenture Future Talent Platform plays that unified-platform role. "It's difficult to create a one-size-fits-all model for upskilling talent," Becirovic said of platform-building. "But the most effective approaches focus on 'learning anytime, anywhere' through digital technologies. Mobile- and tablet-based learning are gaining traction -- mobile learners study on average 40 extra minutes per week." Becirovic also cited the power of social media and collaboration. "Engaging learners through social collaboration enhances learning," he said, noting employees have been found to spend three times more time on social-enabled tools.


AI makes inroads in life sciences in small but significant ways: Lantern Pharma’s quest

lantern-pharma-radr-graphic.png
Lantern is working on three different therapies that had been set aside after showing some progress in clinical trials but not winning approval. They address a variety of cancers, including prostate, ovarian, and non-small-cell lung cancer. Lantern is a tiny company with just nine full-time employees and three contractors; it is not going to run massive, multimillion-dollar drug trials on its own. But what it can do is test drugs in simulation before a trial happens and then partner with larger firms. The company's software platform is called "RADR," which stands for "Response Algorithm for Drug Positioning and Rescue." Not all of this is artificial intelligence. The process starts with choosing which of thousands of genes are likely responsive based on historical statistics about those genes, a step known as "feature selection." That process leads to a shortlist of tens of genes that may be responsive. Lantern takes tissue samples from prospective patients and tests the individuals' unique genetic profiles, looking for the combination of genes that represents a "signature" that may be predictive of drug response.
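
As a generic illustration of that kind of statistical shortlisting (not Lantern Pharma's actual method or data), here is a minimal univariate feature-selection sketch in Python using scikit-learn on a synthetic expression matrix.

```python
# A minimal feature-selection sketch: score every "gene" against response
# and keep the top candidates. The data here is synthetic, not Lantern's.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))       # 200 samples x 5,000 genes
y = rng.integers(0, 2, size=200)       # 1 = responder, 0 = non-responder

# Univariate scoring shortlists a few dozen genes as a candidate signature.
selector = SelectKBest(score_func=f_classif, k=25)
selector.fit(X, y)
candidate_genes = np.flatnonzero(selector.get_support())
print(f"Shortlisted {len(candidate_genes)} of {X.shape[1]} genes")
```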



A Transformation Journey for a Distributed Development Organization

This vision dictates an Agile organization and setup. We’re aware there is no end to bringing better and faster customer value. This is an ongoing journey, where we strive for perfection, sometimes taking major steps, but mostly changing minor things. Those minor things add up and create a difference. One of the biggest challenges in our Agile organization is reusing some of the existing skill sets in teams, while hiring new teams in different locations for the new domains. This approach allows us to use the strengths of the company, but brings the challenge of creating one big team with a single goal. This also makes our experience stronger and more educational for other setups inside and outside the company. Basically, we reuse our existing multi-functional printer (MFP) related technical skills, as well as sales and support skills, and add newly hired teams for new functionalities that do not have any counterpart in the enterprise.


Running Android on PC: A Developer’s Overview


There are a number of different ways to do this, but keep in mind that you can’t run virtual machines on Windows Home; you need the Pro, Enterprise or Education editions. Memu Play is an application that runs an Android emulator; it’s targeted at games. It runs on your Windows PC and integrates mouse and keyboard input. It’s free but shows adverts. Despite uninstalling Hyper-V from my Windows 10 machine and enabling VT-x in the BIOS (and rebooting way too many times), I was unable to make Memu Play work (or play, if we want to be punny) because it claimed I was still running Hyper-V. On my wife’s laptop, it ran fine and was very slick. The website has links to many popular Android games that you can download and run. Like Memu Play, Bluestacks is another emulator focused on Android games; moreover, it claims a speed advantage over Android smartphones. It uses Android N (7.1.2). This isn’t necessarily a problem, as my experience with Android compared to iOS is that there’s longer support for games on older OS versions. GenyMotion takes a different approach, with two offerings targeted at developers: desktop or cloud-based.


The Bug That Got Away

There are countless systems, big and small, that are just riddled with the things. As an engineer I know this very well, as I've contributed to my fair share of them. I've been a software engineer for over ten years, and I've always considered myself to be thorough, especially when it comes to tracking down a bug: the research, the deep diving, and finally the fix. As with any bug, one of the first steps to fixing it is being able to reproduce it. I spoke with our QA team and they weren't immediately able to reproduce it, but mentioned they would look into it further. Hours pass and I receive another message, something to the effect of: QA Person: Rion, I just spun up a fresh new environment and I can reproduce the issue! At this point, I'm excited. I had been fighting with this for over a day and I'm about to dive down the bug-fixing rabbit hole on the way to taking care of this guy. I log into the new environment, and sure enough, QA was right! I can reproduce it! I should have this thing knocked out in a matter of minutes and my day is saved!


What is WireGuard? Secure, simple VPN still in development

secured vpn tunnel
For one, the WireGuard protocol does away with cryptographic agility -- the concept of offering choices among different encryption, key exchange and hashing algorithms -- as this has resulted in insecure deployments with other technologies. Instead the protocol uses a selection of modern, thoroughly tested and peer-reviewed cryptographic primitives, resulting in strong default cryptographic choices that users cannot change or misconfigure. If any serious vulnerability is ever discovered in the underlying crypto primitives, a new version of the protocol is released, and there’s a mechanism for negotiating the protocol version between peers. WireGuard uses ChaCha20 for symmetric encryption with Poly1305 for message authentication, a combination that’s more performant than AES on embedded CPU architectures that don’t have cryptographic hardware acceleration; Curve25519 for elliptic-curve Diffie-Hellman (ECDH) key agreement; BLAKE2s for hashing, which is faster than SHA-3; and a 1.5 Round Trip Time (1.5-RTT) handshake that’s based on the Noise framework and provides forward secrecy.
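
As an illustration of the primitives WireGuard standardizes on (not the WireGuard protocol or its Noise-based handshake), here is a minimal Python sketch using the pyca/cryptography package and hashlib.

```python
# A minimal sketch of the building blocks WireGuard relies on:
# Curve25519 key agreement, BLAKE2s hashing, and ChaCha20-Poly1305 AEAD.
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# Curve25519 ECDH: each peer derives the same shared secret.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())

# BLAKE2s hashing (here just to derive an illustrative 32-byte key).
session_key = hashlib.blake2s(shared).digest()

# ChaCha20-Poly1305 authenticated encryption of a packet payload.
aead = ChaCha20Poly1305(session_key)
nonce = b"\x00" * 12   # real implementations use a counter-based nonce
ciphertext = aead.encrypt(nonce, b"packet payload", None)
assert aead.decrypt(nonce, ciphertext, None) == b"packet payload"
```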


How Amazon customer experience became e-commerce standard

Technology can partially bridge the divide and make it more of a fair fight, say analysts, e-commerce cloud vendors and retailers. Different parts of the Amazon customer experience can be replicated in order to meet consumer -- and, increasingly, business -- buying expectations for which Amazon has set the standard. Cloud e-commerce vendors such as BigCommerce, Shopify, Adobe Magento, Salesforce and Oracle must provide customers with payment processing, a shipping network and SEO to infuse their shopping sites with as many elements of the Amazon customer experience as they can. On top of all that, they also must enable the ability to sell on or off Amazon's marketplace. "Amazon set the bar for a lot of the Western world for integrated, end-to-end customer experience," said Des Cahill, Oracle head CX evangelist. "It's come into B2B as well; it's not just a B2C phenomenon. We build into our platform technologies and services that will enable our customers to deliver that same Amazon-like consistency and personalized experiences."


SaaSOps: The next step in Software as a Service evolution

Businessman using mobile smartphone and connecting cloud computing service with icon customer network connection. Cloud device online storage. Cloud technology internet networking concept.
SaaSOps is a result of the explosion of SaaS in the enterprise. The term is new, but the concept has been gaining momentum for quite some time. You may have heard it referred to as everything from digital workplace ops, to IT operations, to SaaS administration, to cloud office management and end-user computing, just to name a few. But, ultimately, the gist is the same. SaaSOps is a set of disciplines—all the new responsibilities, processes, technologies, and people you need to successfully enable your organization through SaaS. ... SaaSOps ultimately unlocks the potential impact SaaS can have on any given organization: increased productivity, better collaboration, and a happier workforce. In a world where SaaSOps is widely adopted—which I predict will be in the next 3 to 5 years—users can achieve optimum levels of productivity through SaaS, and IT can effectively manage the proliferation of these best-in-breed applications. When companies first start their SaaS journey, adoption is low.


IT: Managing Choice, Change, Careers in 2020

Decisions
Speaking of strategic value, if IT can’t deliver that, their role risks being diminished or outsourced altogether. This is why managing both choice and change are so important. Having a world-class understanding of cloud technology alone isn’t enough for success career-wise when these other elements are having such a strong impact on the business. Deep technology knowledge may have been sufficient with the PBX since it didn’t have much impact on the business beyond providing dial tone. There was nothing transformational about the PBX, so IT didn’t really need to be concerned beyond providing reliable telephony service. The landscape has shifted dramatically with the cloud, and that shift is key to how IT’s role is changing. To make that point, I’ll return once more to Krapf’s talk, where he shared some findings from Enterprise Connect’s 2018 Salary & Career Survey. Below is a comparison of the skillsets IT believes will be important for career success going forward, along with where their current skills are strongest.



Quote for the day:


"A healthy attitude is contagious but don't wait to catch it from others. Be a carrier." -- Tom Stoppard