
Daily Tech Digest - April 02, 2022

PaaS is back: Why enterprises keep trying to resurrect self-service developer platforms

As ever in enterprise IT, it’s a question of control. Or, really, it’s an attempt by organizations to find the right balance between development and operations, between autonomy and governance. No two enterprises will land exactly the same on this freedom continuum, which is arguably why we see every enterprise determined to build its own PaaS/cloud. Hearkening back to Coté’s comment, however, the costs associated with being a snowflake can be high. One solution is simply to enable developer freedom … up to a point. As Leong stressed: “I talk to far too many IT leaders who say, ‘We can’t give developers cloud self-service because we’re not ready for You build it, you run it!’ whereupon I need to gently but firmly remind them that it’s perfectly okay to allow your developers full self-service access to development and testing environments, and the ability to build infrastructure as code (IaC) templates for production, without making them fully responsible for production.” In other words, maybe enterprises needn’t give their developers the keys to the kingdom; the garage will do.


Why EA As A Subject Is A "Must-Have" Now More Than Ever Before?

Enterprise architecture as a subject and knowledge of a reference architecture like IT4IT™ would help EA aspirants appreciate tools for managing a digital enterprise. As students, we know that various organizations are undergoing digital transformation. But hardly do we understand where to start the journey or how to go about the digital transformation if we are left on our own. Knowledge of the TOGAF® Architecture Development Method (ADM) would be a fantastic starting point to answer the above-mentioned question. The as-is assessment followed by the to-be assessment (or vice versa, depending on context) across business, data, application and technology could be a practical starting point. The phase “Opportunities and Solutions” would help produce a roadmap of several initiatives an enterprise can choose for its digital transformation. Enterprise Architecture as a subject in b-school would cut across various subjects and help students with a holistic view.


5 steps to minimum viable enterprise architecture

At Carrier Global Corp., CIO Joe Schulz measures EA success by business metrics such as how employee productivity is affected by application quality or service outages. “We don’t look at enterprise architecture as a single group of people who are the gatekeepers, who are more theoretical in nature about how something should work,” says Schulz. He uses reports and insights generated by EA tool LeanIX to describe the interconnectivity of the ecosystem as well as the systems’ capabilities across the portfolio to identify redundancies or gaps. This allows the global provider of intelligent building and cold chain solutions to “democratize a lot of the decision-making…(to) bring all the best thinking and investment capacity across our organization to bear.” George Tsounis, chief technology officer at bankruptcy technology and services firm Stretto, recommends using EA to “establish trust and transparency” by informing business leaders about current IT spending and areas where platforms are not aligned to the business strategy. That makes future EA-related conversations “much easier than if the enterprise architect is working in a silo and hasn’t got that relationship,” he says.


3 strategies to launch an effective data governance plan

Develop a detailed lifecycle for access that covers employees, guests, and vendors. Don’t delegate permission setting to an onboarding manager as they may over-permission or under-permission the role. Another risk with handling identity governance only at onboarding is that this doesn’t address changes in access necessary as employees change roles or leave the company. Instead, leaders of every part of the organization should determine in advance what access each position needs to do their jobs—no more, no less. Then, your IT and security partner can create role-based access controls for each of these positions. Finally, the compliance team owns the monitoring and reporting to ensure these controls are implemented and followed. When deciding what data people need to access, consider both what they’ll need to do with the data and what level of access they need to do their jobs. For example, a salesperson will need full access to the customer database, but may need only read access to the sales forecast, and may not need any access to the accounts payable app.
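
A minimal sketch of what such role-based controls could look like once business owners have defined them, using the salesperson example above (the role names, apps, and levels here are hypothetical):

    # Hypothetical role-based access map, defined in advance by business leaders.
    # Access levels: "none" < "read" < "full".
    ROLE_PERMISSIONS = {
        "salesperson": {
            "customer_db": "full",
            "sales_forecast": "read",
            "accounts_payable": "none",
        },
    }

    LEVELS = {"none": 0, "read": 1, "full": 2}

    def is_allowed(role: str, resource: str, requested: str) -> bool:
        """Grant access only if the role's predefined level covers the request."""
        granted = ROLE_PERMISSIONS.get(role, {}).get(resource, "none")
        return LEVELS[requested] <= LEVELS[granted]

    assert is_allowed("salesperson", "customer_db", "full")
    assert is_allowed("salesperson", "sales_forecast", "read")
    assert not is_allowed("salesperson", "accounts_payable", "read")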


The Profound Impact of Productivity on Your Soul

Finishing what you set out to do feels great. Have you ever had a rush of satisfaction after checking off that last item on your to-do list? Feeling satisfied and fulfilled about what you are doing is the essence of great productivity. Of course, it means you are getting stuff done, but it also means the stuff you are getting done is actually important and meaningful. ... When we “do,” we share a piece of ourselves with the world. Our work can speak volumes about ourselves. Every time we decide to be productive and take action to complete something, we are embracing our identity and who we are. Being able to choose our efforts and be who we want to be is a rewarding feeling. However, it is also essential to ensure you are doing it for yourself and are not trying to meet someone else’s expectations of you. For example, some younger kids will play sports that they hate to ensure the happiness of their parents. The kids are doing it for their parents, rather than themselves. What happens when you don’t do it for yourself is twofold: first, you become dependent on someone else’s validation.


Apple and Meta shared data with hackers pretending to be law enforcement officials

Apple and Meta handed over user data to hackers who faked emergency data request orders typically sent by law enforcement, according to a report by Bloomberg. The slip-up happened in mid-2021, with both companies falling for the phony requests and providing information about users’ IP addresses, phone numbers, and home addresses. Law enforcement officials often request data from social platforms in connection with criminal investigations, allowing them to obtain information about the owner of a specific online account. While these requests require a subpoena or search warrant signed by a judge, emergency data requests don’t — and are intended for cases that involve life-threatening situations. Fake emergency data requests are becoming increasingly common, as explained in a recent report from Krebs on Security. During an attack, hackers must first gain access to a police department’s email systems. The hackers can then forge an emergency data request that describes the potential danger of not having the requested data sent over right away, all while assuming the identity of a law enforcement official. 


New algorithm could be quantum leap in search for gravitational waves

Grover's algorithm, developed by computer scientist Lov Grover in 1996, harnesses the unusual capabilities and applications of quantum theory to make the process of searching through databases much faster. While quantum computers capable of processing data using Grover's algorithm are still a developing technology, conventional computers are capable of modeling their behavior, allowing researchers to develop techniques which can be adopted when the technology has matured and quantum computers are readily available. The Glasgow team are the first to adapt Grover's algorithm for the purposes of gravitational wave search. In the paper, they demonstrate how they have applied it to gravitational wave searches through software they developed using the Python programming language and Qiskit, a tool for simulating quantum computing processes. The system the team developed is capable of a speed-up in the number of operations proportional to the square-root of the number of templates. Current quantum processors are much slower at performing basic operations than classical computers, but as the technology develops, their performance is expected to improve.
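
The Glasgow team's template-matching pipeline itself isn't reproduced in the article, but the core idea of simulating Grover's algorithm classically with Qiskit can be sketched at toy scale. This two-qubit example marks one of four basis states (the "template" being searched for) and recovers it with a single Grover iteration:

    # Toy 2-qubit Grover search, simulated classically with Qiskit.
    # Illustrates the principle only; a real gravitational wave search
    # involves vastly larger template banks.
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h([0, 1])        # uniform superposition over the 4 "templates"

    qc.cz(0, 1)         # oracle: flip the phase of the marked state |11>

    qc.h([0, 1])        # diffusion operator: invert amplitudes about the mean
    qc.x([0, 1])
    qc.cz(0, 1)
    qc.x([0, 1])
    qc.h([0, 1])

    print(Statevector(qc).probabilities_dict())  # |11> now has probability ~1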


ID.me and the future of biometric zero trust architecture

Although poorly executed and architected, ID.me and the IRS were on the right path: biometrics is a great way to verify identity and provides a way to deter fraud. But the second part, the part they missed, is that biometrics only fights fraud if it is deployed in a way that preserves user privacy and doesn’t itself become a new data source to steal. Personal data fraud has become the seemingly unavoidable penalty for the convenience of digital services. According to consumer reporting agency Experian, fraud has increased 33 percent over the past two years, with fraudulent credit card applications being one of the main infractions. Cisco’s 2021 Cybersecurity Threat Trends report finds that at least one person clicked a phishing link in 86 percent of organizations and that phishing accounts for 90 percent of data breaches. It’s hard not to think that storing the personal and biometric data of the entire United States tax-paying population in one database would become a catalyst for the mother of all data breaches.


GitOps Workflows and Principles for Kubernetes

In essence, GitOps uses the advantages of Git with the practicality and reliability of DevOps best practices. By utilizing things like version control, collaboration and compliance and applying them to infrastructure, teams are using the same approach for infrastructure management as they do for software code, enabling greater collaboration, release speed and accuracy. ... Just like Kubernetes, GitOps is declarative. Git declares the desired state, while GitOps works to achieve and maintain that state; As mentioned above, GitOps creates a single source of truth because everything—from your app code to cluster configurations—is stored, versioned and controlled in Git. GitOps focuses on automation; The approved desired state can be automatically applied and does not require hands-on intervention. Having built-in automated environment testing (the same way you test app code) leverages a familiar workflow used in other places to ensure software quality initiatives are being met before merging to production; GitOps is, in a way, self-regulating. If the application deviates from the desired state, an alert can be raised.
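
The self-regulating behaviour described above boils down to a reconciliation loop: compare the state declared in Git with the state observed in the cluster, alert on drift, and converge. A generic Python sketch of the principle (not any particular GitOps tool's implementation):

    # Generic GitOps reconciliation loop: Git holds the desired state and
    # the controller continuously converges the cluster toward it.
    import time

    def reconcile(fetch_desired, fetch_actual, apply_changes, alert, interval=30):
        while True:
            desired = fetch_desired()   # e.g. manifests at HEAD of the Git repo
            actual = fetch_actual()     # e.g. live objects reported by the cluster
            if desired != actual:
                alert(f"Drift detected: {actual!r} != {desired!r}")
                apply_changes(desired)  # converge back to the declared state
            time.sleep(interval)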


Running legacy systems in the cloud: 3 strategies for success

Teams are capable of learning, but may not be familiar with cloud at the onset of the project. This impacts not only the initial migration but also Day 2 operations and beyond, especially given the velocity of change and new features that the hyperscale platforms — namely Amazon Web Services, Google Cloud Platform, and Microsoft Azure — roll out on a continuous basis. Without the necessary knowledge and experience, teams struggle to optimize their legacy system for cloud infrastructure and resources — and then don’t attain the full capabilities of these platforms. ... No one gains a competitive advantage from worrying about infrastructure these days; they win with a laser focus on transforming their applications and their business. That’s a big part of cloud’s appeal – it allows companies to do just that because it effectively takes traditional infrastructure concerns off their plates. You can then shift your focus to business impacts of the new technologies at your disposal, such as the ability to extract data from a massive system like SAP and integrate with best-of-breed data analytics tooling for new insights.



Quote for the day:

"A friend of mine characterizes leaders simply like this : "Leaders don't inflict pain. They bear pain." -- Max DePree

Daily Tech Digest - June 13, 2021

The race is on for quantum-safe cryptography

Existing encryption systems rely on specific mathematical equations that classical computers aren’t very good at solving — but quantum computers may breeze through them. As a security researcher, Chen is particularly interested in quantum computing’s ability to solve two types of math problems: factoring large numbers and solving discrete logarithms. Pretty much all internet security relies on this math to encrypt information or authenticate users in protocols such as Transport Layer Security. These math problems are simple to perform in one direction, but difficult in reverse, and thus ideal for a cryptographic scheme. “From a classical computer’s point of view, these are hard problems,” says Chen. “However, they are not too hard for quantum computers.” In 1994, the mathematician Peter Shor outlined in a paper how a future quantum computer could solve both the factoring and discrete logarithm problems, but engineers are still struggling to make quantum systems work in practice. While several companies like Google and IBM, along with startups such as IonQ and Xanadu, have built small prototypes, these devices cannot perform consistently, and they have not conclusively completed any useful task beyond what the best conventional computers can achieve.
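
A toy Python illustration of that one-way asymmetry: computing a modular exponentiation (the forward direction) is one fast built-in call, while recovering the exponent, the discrete logarithm, classically amounts to brute search. The numbers below are deliberately tiny; real deployments use groups hundreds of digits long.

    # Forward direction: fast, even for enormous numbers.
    p, g = 2_147_483_647, 5     # toy prime modulus and base
    x = 1_234_567               # the secret exponent
    y = pow(g, x, p)            # a single built-in call

    # Reverse direction: classically, little better than stepping through
    # candidate exponents one by one.
    def discrete_log(y, g, p):
        acc = 1
        for k in range(p):
            if acc == y:
                return k
            acc = acc * g % p

    # discrete_log(y, g, p) grinds through ~1.2 million steps even at this
    # toy size; at cryptographic sizes the search becomes infeasible for
    # classical machines, which is exactly the gap Shor's algorithm closes.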


Lightbend’s Akka Serverless PaaS to Manage Distributed State at Scale

Up to now, serverless technology has not been able to support the stateful, high-performance, scalable applications that enterprises are building today, Murdoch said. Examples of such applications include consumer and industrial IoT, factory automation, modern e-commerce, real-time financial services, streaming media, internet-based gaming and SaaS applications. “Stateful approaches to serverless application design will be required to support a wide range of enterprise applications that can’t currently take advantage of it, such as e-commerce, workflows and anything requiring a human action,” said William Fellows, research director for cloud native at 451 Research. “Serverless functions are short-lived and lose any ‘state’ or context information when they execute.” Lightbend, with Akka Serverless, has addressed the challenge of managing distributed state at scale. “The most significant piece of feedback that we’ve been getting from the beta is that one of the key things that we had to do to build this platform was to find a way to be able to make the data be available in memory at runtime automatically, without the developer having to do anything,” Murdoch said.
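
Lightbend's actual SDK is not shown in the article, but the pattern it rests on, an event-sourced entity whose in-memory state the platform rebuilds automatically from a durable journal between invocations, can be sketched generically (all names here are hypothetical):

    # Generic event-sourced entity: the pattern behind stateful serverless,
    # not Lightbend's actual API.
    class ShoppingCart:
        def __init__(self, events=()):
            self.items = {}
            for event in events:      # the platform replays the journal...
                self.apply(event)     # ...to rebuild in-memory state

        def apply(self, event):
            kind, item, qty = event
            if kind == "added":
                self.items[item] = self.items.get(item, 0) + qty

        def add_item(self, item, qty):
            event = ("added", item, qty)
            self.apply(event)
            return event              # the platform persists the new event

    journal = [("added", "book", 1)]  # durable state between invocations
    cart = ShoppingCart(journal)      # an invocation wakes the entity
    journal.append(cart.add_item("pen", 2))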


Can We Balance Accuracy and Fairness in Machine Learning?

While challenges like these often sound theoretical, they already affect and shape the work that machine learning engineers and researchers produce. Angela Shi looks at a practical application of this conundrum when she explains the visual representation of bias and variance in bull's-eye diagrams. Taking a few steps back, Federico Bianchi and Dirk Hovy’s article identifies the most pressing issues the authors and their colleagues face in the field of natural language processing (NLP): “the speed with which models are published and then used in applications can exceed the discovery of their risks and limitations. And as their size grows, it becomes harder to reproduce these models to discover those aspects.” Federico and Dirk’s post stops short of offering concrete solutions—no single paper could—but it underscores the importance of learning, asking the right (and often most difficult) questions, and refusing to accept an untenable status quo. If what inspires you to take action is expanding your knowledge and growing your skill set, we have some great options for you to choose from this week, too.


The secret of making better decisions, faster

While agility might be critical for sporting success, that doesn't mean it's easily achieved. Filippi tells ZDNet he's spent many years building a strong team, with great heads of department who are empowered to make big calls. "Most of the time you trust them to get on with it," he says. "I'm more of an orchestrator – you cannot micromanage a race team because there's just too much going on. The pace and the volume of work being achieved every week is just mind-blowing." Hackland has similar experiences at Williams F1. Employees are empowered to take decisions and their confidence to make those calls in the factory or out on the track is a crucial component of success. "The engineer who's sitting on the pit wall doesn't have to ask the CIO if we should pit," he says. "The decisions that are made all through the organisation don't feed up to one single individual. Everyone is allowed to make decisions up or down the organisation." As well as being empowered to make big calls, Hackland says a no-blame culture is critical to establishing and supporting decentralised decision making in racing teams.


How to avoid the ethical pitfalls of artificial intelligence and machine learning

Disconnects also exist between the key functional stakeholders required to make sound holistic judgements around ethics in AI and ML. “There is a gap between the bit that is the data analytics AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes,” says Prof. Leonard. “So, you have to look not only at what the technology in the AI is doing, but how that is integrated into the making of the decision by an organisation.” This problem exists in many fields. One field in which it is particularly prevalent is digital advertising. Chief marketing officers, for example, determine marketing strategies that are dependent upon the use of advertising technology, which is in turn managed by a technology team. Separate to this is data privacy, which is managed by yet another team, and Prof. Leonard says these teams don’t speak the same language as each other, making it hard to arrive at a strategically cohesive decision.


Five types of thinking for a high performing data scientist

As data scientists, the first and foremost skill we need is to think in terms of models. In its most abstract form, a model is any physical, mathematical, or logical representation of an object, property, or process. Let’s say we want to build an aircraft engine that will lift heavy loads. Before we build the complete aircraft engine, we might build a miniature model to test the engine for a variety of properties (e.g., fuel consumption, power) under different conditions (e.g., headwind, impact with objects). Even before we build a miniature model, we might build a 3-D digital model that can predict what will happen to the miniature model built out of different materials. ... Data scientists often approach problems with cross-sectional data at a point in time to make predictions or inferences. Unfortunately, given the constantly changing context around most problems, very few things can be analyzed statically. Static thinking reinforces the ‘one-and-done’ approach to model building that is misleading at best and disastrous at its worst. Even simple recommendation engines and chatbots trained on historical data need to be updated on a regular basis. 


Double Trouble – the Threat of Double Extortion Ransomware

Over the past 12 months, double extortion attacks have become increasingly common as the ‘business model’ has proven effective. The data center giant Equinix was hit by the Netwalker ransomware. The threat actor behind that attack was also responsible for the attack against K-Electric, the largest power supplier in Pakistan, demanding $4.5 million in Bitcoin for decryption keys and to stop the release of stolen data. Other companies known to have suffered such attacks include the French system and software consultancy Sopra Steria; the Japanese game developer Capcom; the Italian liquor company Campari Group; the US military missile contractor Westech; the global aerospace and electronics engineering group ST Engineering; travel management giant CWT, who paid $4.5M in Bitcoin to the Ragnar Locker ransomware operators; business services giant Conduent; even soccer club Manchester United. Research shows that in Q3 2020, nearly half of all ransomware cases included the threat of releasing stolen data, and the average ransom payment was $233,817 – up 30% compared to Q2 2020. And that’s just the average ransom paid.


Evolution of code deployment tools at Mixpanel

Manual deploys worked surprisingly well while we were getting our services up and running. More and more features were added to mix to interact not just with k8s but also with other GCP services. To avoid dealing with raw YAML files directly, we moved our k8s configuration management to Jsonnet. Jsonnet allowed us to add templates for commonly used paradigms and reuse them in different deployments. At the same time, we kept adding more k8s clusters. We added more geographically distributed clusters to run the servers handling incoming data to decrease the latency perceived by our ingestion API clients. Around the end of 2018, we started evaluating a European Data Residency product. That required us to deploy another full copy of all our services in two zones in the European Union. We were now up to 12 separate clusters, and many of them ran the same code and had similar configurations. While manual deploys worked fine when we ran code in just two zones, it quickly became infeasible to keep 12 separate clusters in sync manually. Across all our teams, we run more than 100 separate services and deployments.
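
Mixpanel's Jsonnet is not shown in the post, but the template-reuse idea translates roughly as follows (sketched here in Python rather than Jsonnet, with made-up service and registry names): define a commonly used deployment paradigm once, then stamp it out per cluster.

    # Sketch of template reuse across many clusters (names are hypothetical).
    def deployment(service, cluster, replicas=3, image_tag="stable"):
        """Shared template for a commonly used deployment paradigm."""
        return {
            "metadata": {"name": service, "labels": {"cluster": cluster}},
            "spec": {
                "replicas": replicas,
                "image": f"registry.example.com/{service}:{image_tag}",
            },
        }

    CLUSTERS = ["us-east", "us-west", "eu-zone-1", "eu-zone-2"]  # ...up to 12
    manifests = [deployment("ingestion-api", c) for c in CLUSTERS]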


When physics meets financial networks

Generally, physics and financial systems are not easily associated in people's minds. Yet, principles and techniques originating from physics can be very effective in describing the processes taking place on financial markets. Modeling financial systems as networks can greatly enhance our understanding of phenomena that are relevant not only to researchers in economics and other disciplines, but also to ordinary citizens, public agencies and governments. The theory of Complex Networks represents a powerful framework for studying how shocks propagate in financial systems, identifying early-warning signals of forthcoming crises, and reconstructing hidden linkages in interbank systems. ... Here is where network theory comes into play, by clarifying the interplay between the structure of the network, the heterogeneity of the individual characteristics of financial actors and the dynamics of risk propagation, in particular contagion, i.e. the domino effect by which the instability of some financial institutions can reverberate to other institutions to which they are connected. The associated risk is indeed "systemic", i.e. both produced and faced by the system as a whole, as in collective phenomena studied in physics.
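
A toy sketch of the domino effect described above: an institution defaults once losses on its exposures to already-defaulted counterparties exhaust its capital buffer, and the loop propagates the shock until the system settles. The network and numbers are purely illustrative.

    # Toy contagion cascade on an interbank exposure network.
    exposures = {                # lender -> {borrower: amount owed to lender}
        "A": {"B": 8},
        "B": {"C": 6},
        "C": {},
    }
    capital = {"A": 5, "B": 5, "C": 1}
    defaulted = {"C"}            # initial shock

    changed = True
    while changed:               # propagate until the system settles
        changed = False
        for bank, loans in exposures.items():
            if bank in defaulted:
                continue
            losses = sum(amount for borrower, amount in loans.items()
                         if borrower in defaulted)
            if losses >= capital[bank]:
                defaulted.add(bank)
                changed = True

    print(defaulted)             # C's failure topples B, whose failure topples A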


What’s Driving the Surge in Ransomware Attacks?

The trend involves a complex blend of geopolitical and cybersecurity factors, but the underlying reasons for its recent explosion are simple. Ransomware attacks have gotten incredibly easy to execute, and payment methods are now much more friendly to criminals. Meanwhile, businesses are growing increasingly reliant on digital infrastructure and more willing to pay ransoms, thereby increasing the incentive to break in. As the New York Times notes, for years “criminals had to play psychological games to trick people into handing over bank passwords and have the technical know-how to siphon money out of secure personal accounts.” Now, young Russians with a criminal streak and a cash imbalance can simply buy the software and learn the basics on YouTube tutorials, or by getting help from syndicates like DarkSide — who even charge clients a fee to set them up to hack into businesses in exchange for a portion of the proceeds. The breach of the education publisher involving the false pedophile threat was a successful example of such a criminal exchange. Meanwhile, Bitcoin has made it much easier for cybercriminals to collect on their schemes.



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - March 28, 2021

Why risk assessment is important for financial institutions in a digital era

Given that financial institutions are custodians of significant amounts of third-party data, much of which is personal and sensitive, it is imperative now more than ever to manage and assess the risks and their impact on the existing ecosystem to drive optimum value from their digital initiatives. The risks are indeed multiplied where data is involved. With the ubiquity of online banking apps and services, the likelihood of a breach at some point is almost certain, and that is when banks must be prepared. As the cadence of cyberattacks increases, organisations can no longer hide internal dysfunction from external stakeholders. “[When] an inevitable breach, audit or Royal Commission happens, financial institutions will only survive the exposure if they can show that they have actually taken all reasonable steps to protect themselves,” Greaves said. Being in control of the high-risk data must be the first step in mitigating these risks. “The key to treating information risk is to have full control of that information. If an institution is unfamiliar with what data it has, who is doing what to it, and where and how it is stored within its systems, it will be unable to control it or protect it,” Greaves said.


How HR Leaders Are Preparing for the AI-Enabled Workforce

Predicting the nature of future jobs is, of course, difficult or impossible to do with precision. And even if predictions are possible, they will probably differ substantially from job to job. Nevertheless, some companies are embarking on approaches that predict the future of either all jobs in the organization, those that are particularly likely to be affected by AI, or jobs that are closely tied to future strategies. ... Some companies are making specific job predictions based on their strategies or products. In Europe, a consortium of microelectronics companies is devoting 2 billion euros to train current and future employees on electronic components and systems. General Motors is focused on training its employees to manufacture electric and autonomous vehicles. Verizon is focused on hiring and training data scientists and marketers to expand its 5G wireless technology. SAP is focused on growing employees’ skills in cloud computing, artificial intelligence development, blockchain, and the internet of things. The raging bull of machine learning has turned out to be slower and calmer than many people predicted a few years ago. But any rancher knows you should never turn your back on a bull, no matter how docile it seems.


Software Defined Everything Part 6: Infrastructure

SDxI must understand the appropriate context of users, applications, devices, and locations related to the creation of a virtual machine, container, or even a data flow or set of network attributes such as source/destination addresses and tags. Advanced infrastructure needs to be able to provide data gathering on context-relevant metrics for debugging, security and audit, performance management, and billing and marketing. Historically, context-awareness was the purview of specialized point products such as networking devices (primarily Layers 4-7) that directed and processed traffic based on rules and inspecting incoming data. But this processing only occurs at specific points in the infrastructure. SDxI applications are more demanding and need holistic context-awareness across networking, compute, and storage to optimize workload placement in context of what a user, device, or app is trying to accomplish. For example, efforts are underway to add contextual-driven automation to both private and public cloud environments via OpenStack Heat. In this model, external context-based triggers drive VMs and their compute, storage, and network resources to spin up or down to maximize performance, minimize latency, or meet appropriate business objectives.
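
OpenStack Heat expresses such rules declaratively in templates, but the trigger logic can be sketched imperatively to make the idea concrete (the thresholds and field names below are hypothetical):

    # Toy context-driven scaling rule: external triggers drive resources
    # up or down against a latency objective.
    def scaling_decision(context):
        """context: dict with the app, the user tier, and observed latency (ms)."""
        target_ms = 50 if context["tier"] == "premium" else 200
        if context["latency_ms"] > target_ms:
            return "spin_up"      # add VMs/containers to restore performance
        if context["latency_ms"] < target_ms * 0.3:
            return "spin_down"    # release resources to minimize cost
        return "hold"

    print(scaling_decision({"app": "video", "tier": "premium", "latency_ms": 80}))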


5 fintech trends to watch out for in 2021

While digital banking has been around since long before the pandemic, it spiked in usage amidst the pandemic. Research shows that about 50 per cent of consumers are using digital banking products more since the pandemic, with 87 per cent of them planning to continue this increased usage after the pandemic. This shows that digital banking has evolved from a “nice-to-have” to a “must-have” solution for consumers and businesses. However, despite the convenience in use that digital banking offers, many consumers are still wary of the dangers that digital banking solutions bring. ... Just as self-service solutions have become rampant during the pandemic to avoid possible infection, autonomous finance is expected to rise in 2021 as well. Several fintech solutions today make it possible for people to manage their money, open accounts, apply for loans, and more with just a click of a button. Thanks to AI and machine learning, these solutions are now more accessible than lining up in traditional banks and going through tedious processes. ... Bitcoin’s rising price is due to various reasons, some of which include growing institutional interest, usage as a hedge against inflation, and PayPal’s official entrance in the crypto scene.


Navigating Data Security Within Data Sharing In Today’s Evolving Landscape

Successful cross-enterprise data strategies bring a unified approach to data integration, quality, governance, and data sharing. Innovation is not through a set of siloed products. It is a single platform that moves and manages different types of data under one roof. To create a successful data management strategy and avoid any data security mishaps, chief data officers (CDOs) and their teams should start by setting up governance and establishing business rules and system controls for access. CDOs report the most success when their data sharing architecture is built on microservices that answer business questions. That is, what data is needed to provide insights into the most difficult business problems? For example, the CDO of a large Internet-based home furnishing company recently shared that when they treat data integration as a business transformation project, they receive better requirements about business needs, data security and data trust, more focus from stakeholders, and broader adoption across the organization and within roles. Another best practice approach that both encourages sharing while also only labeling trusted, vetted data sources is the concept of certified versus uncertified data sets.


Factorized layers revisited: Compressing deep networks without playing the lottery

The key principle underlying these two natural methods, neither of which requires extra hyperparameters, is that the training behavior of a factorized model should mimic that of the original (unfactorized) network. We further demonstrate the usefulness of these schemes in two settings beyond model compression where factorized neural layers are applied. The first is an exciting new area of knowledge distillation in which an overcomplete factorization is used to replace the complicated and expensive student-teacher training phase with a single matrix multiplication at each layer. The second is for training Transformer-based architectures such as BERT, which are popular models for learning over sequences like text and genomic data and whose multi-head self-attention mechanisms are also factorized neural layers. Our work is part of Microsoft Research New England’s AutoML research efforts, which seek to make the exploration and deployment of state-of-the-art machine learning easier through the development of models that help automate the complex processes involved.
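
For readers new to the underlying object: a factorized linear layer replaces a dense weight matrix W with a low-rank product UV, which is where the compression comes from. A minimal numpy sketch of the parameter savings (the paper's initialization and training schemes are not reproduced here):

    import numpy as np

    m, n, r = 512, 256, 32            # r << min(m, n) gives the compression

    # Dense layer: y = W x, with m*n parameters.
    W = np.random.randn(m, n)

    # Factorized layer: y = U (V x), with r*(m + n) parameters.
    U = np.random.randn(m, r)
    V = np.random.randn(r, n)

    x = np.random.randn(n)
    y = U @ (V @ x)                   # same output shape, far fewer weights

    print(m * n, "vs", r * (m + n))   # 131072 vs 24576 parameters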


The Taboo Of Remote Working And Hiring In India

Barring some of India’s major cities, good Internet connectivity is still the stuff of dreams. Fighting bugs while your strongest warrior is out cold due to poor connectivity is every CTO’s worst nightmare. It’s not just about dire circumstances, though; many young developers live in shared accommodations without a personal space to focus on work. Remote work can succeed only with the implicit understanding that work time at home is as focused as work time in the office. In addition to poor Internet, the lack of facilities such as a good work desk and a well-lit room also hampers productivity. One of the prime reasons why developers are aching to come back to the office is because coding while in bed and in your PJs has an early expiry date. Then there’s connection. Indian workplaces have traditionally depended more on verbal communication than written documentation. We’d rather walk up to someone and provide feedback than write it up in precise points in an email. With remote work, both developers and managers need to adopt a different cadence of verbal and written communication that is direct and constructive.


Behavioral Psychology Might Explain What’s Holding Boards Back

Boards can only be effective if they have the ability to come to a consensus. No one wants to feel that the board is made up of factions with irreconcilable differences. Even when the board undergoes a shake-up, like the addition of an activist director, they tend to quickly reach a new equilibrium. But while consensus-building is important, boards may be too inclined to seek harmony or conformity. This can lead to groupthink, where dissenting views are not welcomed or entertained. In fact, while most boards work to solicit a range of views and come to a consensus on key issues, 36% of directors say it is difficult to voice a dissenting view on at least one topic in the boardroom. This can point to dysfunctional decision-making as the board members avoid making waves. In fact, the most common reason that directors cite for stifled dissent on their boards is the desire to maintain collegiality among their peers. Groupthink is also magnified when the board is not effectively educated on a topic, or does not have access to the right information. Board materials may come too late for members to have any real time to review and reflect on the information before a meeting.


Digital transformation: This is why CIOs need to stay brave and keep on innovating

Hackland recognises that it can be difficult for CIOs to gain funding for innovative projects, especially in organisations with competing priorities. But when there's a chance to try something new, the opportunity must be grabbed – not just in terms of the potential benefits it might bring to the company itself but also in terms of professional development. "You're learning and your people are learning," says Hackland, referring to the importance of experimentation. "They're engaged in something new, they're not just doing lights-on, which I think is really important. They're getting to play with new technologies." Which brings us back to Williams' recent foray into virtual reality, which was one such attempt to try something new. The intention was to allow users of a bespoke VR app to view and manipulate the new car in its livery in 3D. The app, which was created by an external agency, was made available for fans to download on the Apple App Store and Google Play Store. However, when pictures of the FW43B started appearing online, the team couldn't be sure if only the image data for the new car had been unpacked or whether the app itself had been compromised.


Platform Engineering As A (Community) Service

At its core, platform engineering is all about building, well, a platform. In this context, I mean an internal platform within an organisation, not a general business platform for external consumers. This platform serves as a foundation for other engineering teams building products and systems on top of it for end users. Concrete goals include: Improving developer productivity and efficiency - through things like tooling, automation and infrastructure-as-code; Providing consistency and confidence around complex cross cutting areas of concerns - such as security and reliable auto scaling; Helping organisations to grow teams in a sustainable manner to meet increased business demands. Matthew Skelton concisely defines a platform as “a curated experience for engineers (the customers of the platform)”. This phrase “curated experience” very nicely encapsulates the essence of what I have come to recognise and appreciate as being a crucial differentiator for successful platforms. Namely, it’s not just about one technology solving all your problems. Nor is it about creating a wrapper around a bunch of tech.



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick

Daily Tech Digest - December 11, 2020

5 signs your agile development process must change

Agile teams figure out fairly quickly that polluting a backlog with every idea, request, or technical issue makes it difficult for the product owner, scrum master, and team to work efficiently. If teams maintain a large backlog in their agile tools, they should use labels or tags to filter the near-term versus longer-term priorities. An even greater challenge is when teams adopt just-in-time planning and prioritize, write, review, and estimate user stories during the days leading up to the sprint start. It’s far more difficult to develop a shared understanding of the requirements under time pressure. Teams are less likely to consider architecture, operations, technical standards, and other best practices when there isn’t sufficient time dedicated to planning. What’s worse is that it’s hard to accommodate downstream business processes, such as training and change management, if business stakeholders don’t know the target deliverables or medium-term roadmap. There are several best practices for planning backlogs, including continuous agile planning, Program Increment planning, and other quarterly planning practices. These practices help multiple agile teams brainstorm epics, break down features, confirm dependencies, and prioritize user story writing.


How to Align DevOps with Your PaaS Strategy

Some organizations are adopting a multi-PaaS strategy, which typically takes the form of developing an application on one PaaS and deploying it to multiple public clouds. However, not all PaaS offerings provide that capability. One reason to deploy to multiple clouds is to increase application reliability. Despite SLAs, outages may occur from time to time. Alternatively, different applications may require the use of different PaaS offerings because the PaaS services vary from vendor to vendor. However, more vendors mean more complexity to manage. "Tomorrow, your business transaction is going to be going over SaaS services provided by multiple vendors, so I might have to orchestrate across multiple clouds, multiple vendors to complete my business transaction," said Chennapragada. "Tying myself [to] a vendor is going to constrain me from orchestrating, so our clients are thinking of a more cloud-agnostic, vendor-agnostic solution." One of the general concerns some organizations have is whether they have the expertise to manage everything themselves, which has led to a huge proliferation of managed service providers. That way, DevOps teams have more time to focus on product development and delivery. PaaS expertise can be difficult to find because PaaS skills are niche skills.


Low Code: CIOs Talk Challenges and Potential

CIO viewpoints honestly differed. For example, CIO Milos Topic suggests “it is still early in experimentation in our environment, but it is mostly useful in automating and provisioning repetitive processes and modules. But it is essential to stress that low code doesn't mean hands off.” Meanwhile, CIO David Seidl says “the adoption is big because of the ability to make more responsive changes. The trade-off is interesting. The open question is: can you remove one of the cost layers (maintaining code) and trade it for business logic and platform maintenance? And how do you minimize platform maintenance, and could cloud services help? The big question is: do we consider business logic code? It can be just as complex to build and debug complex business logic in a drag and drop as traditional code. So, you win on the UI/layout/integration components, but core code remains an open question.” However, CIO Deb Gildersleeve suggests that low code gives business users without technical coding expertise the tools to solve their problems. It takes the burden outside of IT but can be provided with guardrails for security governance.


Security Think Tank: Integration between SIEM/SOAR is critical

Security operations teams will have a playbook which details the decisions and actions to be taken from detection to containment. This may suggest actions to be taken on detection of a suspicious event through escalation and possible responses. SOAR can automate this, taking autonomous decisions that support the investigation, drawing in threat intelligence and presenting the results to the analyst with recommendations for further action. The analyst can then select the appropriate action, which would be carried out automatically, or the whole process can be automated. For example, the detection of a possible command and control transmission could be followed up in accordance with the playbook to gather relevant threat intelligence and information on which hosts are involved and other related transmissions. The analyst would then be notified and given the option to block the transmissions and isolate the hosts involved. Once selected, the actions would be carried out automatically. Throughout the process, ticketing and collaboration tools would keep the team and relevant stakeholders informed and generate reports as required.
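
A skeletal version of the command-and-control playbook just described, sketched generically in Python (the function and field names are hypothetical; real SOAR platforms express playbooks in their own formats):

    # Skeletal SOAR playbook step for a suspected command-and-control detection.
    def execute(action, context):                # stub: would call firewall/EDR APIs
        print("executing", action)

    def notify_stakeholders(context, actions):   # stub: ticketing/collaboration tools
        print("stakeholders notified of", actions)

    def handle_c2_alert(event, intel, analyst, fully_automated=False):
        context = {
            "threat_intel": intel.lookup(event["dst_ip"]),   # enrich the event
            "hosts": event["involved_hosts"],
            "related": intel.related_transmissions(event),
        }
        actions = ["block_transmissions", "isolate_hosts"]   # playbook options
        # Either the analyst picks from the recommendations, or the whole
        # response runs autonomously.
        if fully_automated or analyst.approve(context, actions):
            for action in actions:
                execute(action, context)
            notify_stakeholders(context, actions)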


Low-Code To Become Commonplace in 2021

The citizen developer concept has been gathering marketing steam, but it might not be just hype. Now, data suggests low-code tools are actually opening doors for such non-developers. Seventy percent of companies said non-developers in their company already build tools for internal business use, and nearly 80% predict to see more of this trend in 2021. It should be noted that low-code and no-code do not seek to replace all engineering talent; instead, to free them up to engage in more complex tasks. “With low-code, you free up your engineers to work on harder problems, instead of having them work on basic things,” said Arisa Amano, CEO of Internal. She believes this could translate into more innovation companywide. Surprisingly, bringing non-traditional engineers into the development fold is not being met with ambivalence—69.2% of respondents foresee that citizen developers positively affect engineering teams, with the rest primarily exhibiting a neutral reaction. The costs of internal security threats are high. Breaches could decrease customer trust, harm brand reputation and lead to escalating legal fees. With cyberattacks a prevalent concern, cybersecurity must come back in style.


People want data privacy but don’t always know what they’re getting

In practice, differential privacy isn’t perfect. The randomization process must be calibrated carefully. Too much randomness will make the summary statistics inaccurate. Too little will leave people vulnerable to being identified. Also, if the randomization takes place after everyone’s unaltered data has been collected, as is common in some versions of differential privacy, hackers may still be able to get at the original data. When differential privacy was developed in 2006, it was mostly regarded as a theoretically interesting tool. In 2014, Google became the first company to start publicly using differential privacy for data collection. Since then, new systems using differential privacy have been deployed by Microsoft, Google and the U.S. Census Bureau. Apple uses it to power machine learning algorithms without needing to see your data, and Uber turned to it to make sure their internal data analysts can’t abuse their power. Differential privacy is often hailed as the solution to the online advertising industry’s privacy issues by allowing advertisers to learn how people respond to their ads without tracking individuals. But it’s not clear that people who are weighing whether to share their data have clear expectations about, or understand, differential privacy.
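
The calibration trade-off is easiest to see in the classic randomized-response mechanism, a simple form of local differential privacy: each respondent flips coins before answering, so any single answer is deniable, yet the aggregate remains recoverable. A small Python simulation:

    import random

    def randomized_response(truth: bool) -> bool:
        """Answer truthfully on heads; otherwise answer uniformly at random."""
        if random.random() < 0.5:
            return truth
        return random.random() < 0.5

    # Simulate a survey in which 30% of people would truly answer "yes".
    n = 100_000
    answers = [randomized_response(random.random() < 0.3) for _ in range(n)]

    # P(reported yes) = 0.5*p + 0.25, so the noise can be inverted in aggregate:
    p_hat = 2 * (sum(answers) / n) - 0.5
    print(round(p_hat, 3))   # close to 0.30, though no single answer is trustworthy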


Widespread malware campaign seeks to silently inject ads into search results

The malware makes changes to certain browser extensions. On Google Chrome, the malware typically modifies “Chrome Media Router”, one of the browser’s default extensions, but we have seen it use different extensions. Each extension on Chromium-based browsers has a unique 32-character ID that users can use to locate the extension on machines or on the Chrome Web store. On Microsoft Edge and Yandex Browser, it uses IDs of legitimate extensions, such as “Radioplayer” to masquerade as legitimate. As it is rare for most of these extensions to be already installed on devices, it creates a new folder with this extension ID and stores malicious components in this folder. On Firefox, it appends a folder with a Globally Unique Identifier (GUID) to the browser extension. ... Despite targeting different extensions on each browser, the malware adds the same malicious scripts to these extensions. In some cases, the malware modifies the default extension by adding seven JavaScript files and one manifest.json file to the target extension’s file path. In other cases, it creates a new folder with the same malicious components. These malicious scripts connect to the attacker’s server to fetch additional scripts, which are responsible for injecting advertisements into search results.


Penetration Testing: A Road Map for Improving Outcomes

Traditional penetration testing is a core element of many organizations' cybersecurity efforts because it provides a reliable measurement of the organization's security and defense measures. However, because a client can classify assets as out of scope, the pen test may not give an accurate read on the organization's full security posture. Because the pen-testing approach, authorization process, and testing ranges are defined in advance, these assessments may not measure an organization's true ability to identify and act on suspicious activities and traffic. Ultimately, placing restrictions on a test's scope or duration can harm the tested organization. In the real world, neither time nor scope are of any consideration to attackers, meaning the results of such a test are not entirely reliable. Incorporating objective-oriented penetration testing can improve typical pen-testing systems and, in turn, enhance an organization's security posture and incident response, as well as limit their risk of exposure. The first step is to agree on attackers' likely objectives and a reasonable time frame. For example, consider ways attackers could access and compromise customer data or gain access to a high-security network or physical location. 


Facial recognition's fate could be decided in 2021

Several lawsuits filed in 2020 that could see resolution next year may also have an impact on facial recognition. Clearview AI is facing multiple lawsuits about its data collection. The company collected billions of public images from social networks including YouTube, Facebook and Twitter. All of those companies have sent a cease-and-desist letter to Clearview AI, but the company maintains that it has a First Amendment right to take these images. That argument is being challenged by Vermont's attorney general, the American Civil Liberties Union and two lawsuits in Illinois. Clearview AI didn't respond to requests for comment. The Clearview decision could play a role in facial recognition's future. The industry relies on hordes of images of people, which it gets in many ways. An NBC News report in 2019 called it a "dirty little secret" that millions of photos online have been getting collected without people's permission to train facial recognition algorithms. "We're likely to also see growing amounts of litigation against schools, businesses and other public accommodations under a new wave of biometric privacy laws, including New York City's forthcoming ban on commercial biometric surveillance," said the Surveillance Technology Oversight Project's Cahn.


Hacking Group Dropping Malware Via Facebook, Cloud Services

While the newly discovered DropBook backdoor uses fake Facebook accounts for its command-and-control operations, the report notes that both SharpStage and DropBook utilize Dropbox to exfiltrate the data stolen from their targets, as well as for storing espionage tools. Once a device is compromised, the SharpStage backdoor can capture screenshots, check for Arabic language presence in the victims' device for precision targeting and download and execute additional components. DropBook, on the other hand, is used for reconnaissance and to deploy shell commands, the report notes. The attackers use MoleNet to collect system information from the compromised devices, communicate with the command-and-control servers and maintain persistence, according to the report. Besides the new backdoor components, researchers note the hackers deployed an open-source remote access Trojan called Quasar, which was previously linked to a Molerats campaign in 2017. Cybereason researchers note that once the DropBook malware is in the victims' devices, it begins its operation by fetching a token from a post on a fake Facebook account.



Quote for the day:

"Example has more followers than reason. We unconsciously imitate what pleases us, and approximate to the characters we most admire." -- Christian Nestell Bovee

Daily Tech Digest - February 28, 2020

Google says Microsoft Edge isn't secure

"Google recommends switching to Chrome to use extensions securely," says a pop-up. Oh, so Edge is insecure? That's terrible. Oddly, when I tried the browser, I found it a touch faster and privacy-friendlier than Google's. It didn't seem so insecure. Why would Google be so worried on my behalf? Worse, Techdows reported that Google is also offering more desperate warnings for users of Google Docs, Google News and Google Translate. The essential message: don't pair these with Edge. This verged on terrible mean-spiritedness, I feared. After all, Edge is based on Google's own Chromium platform. Just as I was about to punish Google by using Bing for a day, another piece of troubling information assaulted me. According to PC World, Microsoft is apparently telling those who use Edge and go to the Chrome web store to get an extension: "Extensions installed from sources other than the Microsoft Store are unverified, and may affect browser performance." Can't we rely on anything these days? Naturally, I instantly contacted Google to ask in what way Edge was insecure. Without pausing for breath or to curse at the new space bar issues with my MacBook Air, I asked Microsoft why extensions from the Chrome store might make Edge a little edgy.



Multi-Runtime Microservices Architecture

One of the well-known traditional solutions satisfying an older generation of the above-listed needs is the Enterprise Service Bus and its variants, such as Message Oriented Middleware, lighter integration frameworks, and others. An ESB is a middleware that enables interoperability among heterogeneous environments using a service-oriented architecture. While an ESB would offer you a good feature set, the main challenge with ESBs was the monolithic architecture and tight technological coupling between business logic and platform, which led to technological and organizational centralization. When a service was developed and deployed into such a system, it was deeply coupled with the distributed system framework, which in turn limited the evolution of the service. This often only became apparent later in the life of the software. Here are a few of the issues and limitations, in each category of needs, that make ESBs less useful in the modern era. In traditional middleware, there is usually a single supported language runtime, which dictates how the software is packaged, what libraries are available, how often they have to be patched, etc.


Intel takes aim at Huawei 5G market presence

Intel on Monday introduced a raft of new processors, and while updates to the Xeon Scalable lineup led the parade, the real news is Intel's efforts to go after the embattled Huawei Technologies in the 5G market. Intel unveiled its first ever 5G integrated chip platform, the Atom P5900, for use in base stations. Navin Shenoy, executive vice president and general manager of the data platforms group at Intel, said the product is designed for 5G's high bandwidth and low latency and combines compute, 100Gb performance and acceleration into a single SoC. "It delivers a performance punch in packet security throughput, and improved packet balancing throughput versus using software alone," Shenoy said in the video accompanying the announcement. Intel claims the dynamic load balancer native to the Atom P5900 chip is 3.7 times more efficient at packet balancing throughput than software alone. Shenoy said Ericsson, Nokia, and ZTE have announced that they will use the Atom P5900 in their base stations. Intel hopes to be the market leader for silicon base station chips by 2021, aiming for 40% of the market and six million 5G base stations by 2024.


Can Deutsche Bank’s PaaS help turn the bank around?

It’s a rapid success story for a highly leveraged and highly regulated international bank – which is in the midst of a turnaround effort and that registered a loss of €5.7 billion ($7.4 billion) last year – and one that even has management considering whether Fabric is good enough to sell to rival banks to eventually turn its technology investments into a revenue stream. A key problem Fabric helped solve was one that confronted the bank’s new leadership when it arrived in 2015: a sizeable virtual machine (VM) estate that was only being utilised at a rate of around eight percent. “The CIOs got together and realised they had a problem to fix because this is just money that’s bleeding out to the organisation,” platform-as-a-service product owner at Deutsche Bank, Emma Williamson, said during a recent Red Hat OpenShift Commons event in London. So the bank set out to drastically modernise its application estate around cloud native technologies like containers and Kubernetes, all with the aim of cutting this waste tied to its legacy platforms and help drive a broader shift towards the cloud.


The 9 Best Free Online Data Science Courses In 2020

You don't have to spend a fortune and study for years to start working with big data, analytics, and artificial intelligence. Demand for "armchair data scientists" – those without formal qualifications in the subject but with the skills and knowledge to analyze data in their everyday work – is predicted to outstrip demand for traditionally qualified data scientists in the coming years. ... Some of these might require payment at the end of the course if you want official certification or accreditation of completing the course, but the learning material is freely available to anyone who wants to level up their data knowledge and skills. ... As it is a Microsoft course, its cloud-based components focus on the company's Azure framework, but the concepts that are taught are equally applicable in organizations that are tied to competing cloud frameworks such as AWS. It assumes a basic understanding of R or Python, the two most frequently used programming languages in data science, so it may be useful to look first at one of the courses covering those, mentioned below.


Microsoft Makes Progress on PowerShell Secrets Management Module

The idea behind the module is that it has been difficult for organizations to manage secrets securely, especially when running scripts across heterogeneous cloud environments. Developers writing scripts want them to run across different platforms, but that might involve handling multiple secrets and multiple secrets types. The team sees PowerShell serving as a connection point between different systems. Consequently, it built an abstraction layer in PowerShell that can be used to manage secrets, both with local vaults and remote vaults, Smith explained in a November Ignite talk. The module helps manage local and remote secrets in unified way, Smith added. It might be used to run a script in various environments, where just the vault parameter would need to be changed. Scripts could be shared across an organization, and it wouldn't be necessary to know the local vaults of the various users. Keys could be shared with users in test environments, but deployment keys could be individualized. It would be less necessary to hard-code secrets into scripts. The PowerShell Secret Management Module is being designed to work with various vault extensions.
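
The module's cmdlets aren't quoted in the excerpt, so here is a language-neutral sketch (in Python, with hypothetical names) of the abstraction-layer idea being described: heterogeneous vault backends registered behind one uniform get-secret call, where only the vault parameter changes per environment.

    # Hypothetical sketch of a secrets abstraction layer; illustrates the
    # concept, not the PowerShell module's actual API.
    class SecretStore:
        def __init__(self):
            self.vaults = {}

        def register_vault(self, name, backend):
            """Backend may be local (file, keyring) or remote (cloud vault)."""
            self.vaults[name] = backend

        def get_secret(self, name, vault):
            """Uniform call; scripts change only the vault parameter."""
            return self.vaults[vault].get(name)

    store = SecretStore()
    store.register_vault("local-test", {"deploy-key": "test-key"})   # toy backends
    store.register_vault("prod-cloud", {"deploy-key": "prod-key"})
    print(store.get_secret("deploy-key", vault="local-test"))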


Facebook sues SDK maker for secretly harvesting user data

According to court documents obtained by ZDNet, the SDK was embedded in shopping, gaming, and utility-type apps, some of which were made available through the official Google Play Store. "After a user installed one of these apps on their device, the malicious SDK enabled OneAudience to collect information about the user from their device and their Facebook, Google, or Twitter accounts, in instances where the user logged into the app using those accounts," the complaint reads. "With respect to Facebook, OneAudience used the malicious SDK – without authorization from Facebook – to access and obtain a user's name, email address, locale (i.e. the country that the user logged in from), time zone, Facebook ID, and, in limited instances, gender," Facebook said. Twitter was the first to expose OneAudience's secret data harvesting practices, on November 26 last year; Facebook confirmed the same day. In a blog post at the time, Twitter also confirmed that besides itself and Facebook, the data harvesting targeted the users of other companies, such as Apple and Google.


Product Development with Continuous Delivery Indicators

Software product delivery organizations ship complex software systems on an ever more frequent basis. The main activities involved in software delivery are Product Management, Development and Operations (activities, that is, not the separate siloed departments we recommend against). In each activity, many decisions have to be made quickly to keep delivery moving. In Product Management, the decisions are about feature prioritization; in Development, about the efficiency of the development process; and in Operations, about reliability. These decisions can be made based on the experience of the team members, but they can also be made based on data, which should lead to a more objective and transparent decision-making process. Especially with the increasing speed of delivery and the growing number of delivery teams, an organization’s ability to be transparent is an important means of keeping everyone continuously aligned without time-consuming synchronization meetings. A small sketch of how such indicators might be computed follows.
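
As an illustration, delivery indicators such as lead time and deployment frequency can be derived from simple event data. The sketch below uses an invented event log; the field layout and numbers are hypothetical, not taken from the article.

```python
# Illustrative sketch: deriving two common delivery indicators
# (lead time and deployment frequency) from a hypothetical event log.
from datetime import datetime, timedelta
from statistics import median

deployments = [
    # (commit time, deploy time) for each change that reached production
    (datetime(2020, 1, 6, 9, 0), datetime(2020, 1, 6, 15, 0)),
    (datetime(2020, 1, 7, 11, 0), datetime(2020, 1, 8, 10, 0)),
    (datetime(2020, 1, 9, 14, 0), datetime(2020, 1, 9, 16, 30)),
]

# Lead time: how long a change takes from commit to production.
lead_times = [deployed - committed for committed, deployed in deployments]
print("median lead time:", median(lead_times))

# Deployment frequency: deploys per week over the observed window.
window = deployments[-1][1] - deployments[0][1]
per_week = len(deployments) / (window / timedelta(weeks=1))
print(f"deployment frequency: {per_week:.1f} per week")
```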


Can Machines And Artificial Intelligence Be Creative?

If AI can enhance creativity in visual art, can it do the same for musicians? David Cope has spent the last 30 years working on Experiments in Musical Intelligence, or EMI. Cope is a traditional musician and composer, but turned to computers to help get past composer’s block back in 1982. Since that time, his algorithms have produced numerous original compositions in a variety of genres, as well as created Emily Howell, an AI that can compose music based on her own style rather than just replicating the styles of yesterday’s composers. In many cases, AI is a new collaborator for today’s popular musicians. Sony’s Flow Machines and IBM’s Watson are just two of the tools music producers, YouTubers, and other artists are relying on to churn out today’s hits. Alex Da Kid, a Grammy-nominated producer, used IBM’s Watson to inform his creative process. The AI analyzed the "emotional temperature" of the time by scraping conversations, newspapers, and headlines over a five-year period. Alex then used the analytics to determine the theme for his next single.


Educating Educators: Microsoft's Tips for Security Awareness Training

The key challenge is creating an engaging, relatable training course that effectively teaches employees the concepts they need to know, Sexsmith said, pointing to a few tricks he uses in his programs. One of these is "Social Proof Theory," a social-psychological concept describing how people copy other people's behavior – if your colleagues are doing a training, you'll do it, too. Gamification also helps: "People want to learn; people want to master skills, but there's also a competitive nature around that," he said. Some trainings use videos that make security concepts more accessible. One problem, he said, is that lessons that aren't reinforced aren't retained: humans forget half of new information within an hour and 70% of it within a day. "By lunchtime, you're going to forget 50% of the stuff I'm up here saying," he joked to his morning audience. To fight this, Microsoft uses a training reinforcement platform called Elephants Don't Forget to help employees build muscle memory around new concepts. During the gap between trainings, the program sends participants two daily emails with a link to questions tailored to the course.



Quote for the day:


"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell


Daily Tech Digest - February 03, 2020

Why UK's Huawei decision leaves the fate of global 5G wireless in US hands

"The UK has been doing business with Huawei for a long time through Openreach. They had been operating, with oversight, in the country for years," noted Doug Brake, who directs broadband and spectrum policy for Washington, DC-based Information Technology & Innovation Foundation. Openreach, to which Brake refers, is the division of top British telco BT responsible for deploying fiber optic infrastructure. It had been partnering mainly with Huawei until last November, when it began an evaluation process in search for additional partners. "So for the UK to come out and publicly brand them as a high-risk vendor, cordon them off to only 35 percent of the access network — not even let them into the core network," said Brake, "really puts Huawei in a tight box." For its part, Huawei did what it could Tuesday to thwart any possible interpretation of tightness, or a box. Omitting any mention of security or exploiting back doors in the infrastructure, Huawei Vice President Victor Zhang issued a statement, reading in part: "This evidence-based decision will result in a more advanced, more secure, and more cost-effective telecoms infrastructure that is fit for the future..."



Lex: An Optimizing Compiler for Regular Expressions

This perhaps isn't the fastest C# NFA regex engine around yet, but it does support Unicode and lazy expressions, and it is getting faster thanks to the optimizing compiler. A Pike virtual machine is a technique for running regular expressions that relies on input programs to dictate how to match: the VM is an interpreter that runs bytecode, and that bytecode, compiled from one or more regular expressions, executes the matching operation. Basically, a Pike VM is a little cooperatively scheduled concurrent VM that runs the bytecode, with some cleverness in it to avoid backtracking. It's potentially extremely powerful and very extensible, but this one is still a baby and very much a work in progress. The VM itself is solid, but the regular expressions could use a little shoring up, and it could use some more regex features, like anchors.
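
To make the "concurrent VM that avoids backtracking" idea concrete, here is a toy Pike-style VM in Python. It advances a set of live threads (program counters) in lockstep over the input, one character at a time, so no input position is ever revisited. The instruction set and example program are invented for illustration; this is a sketch of the general technique, not the article's C# engine.

```python
# Toy Pike-style VM: runs regex bytecode over the input one character
# at a time, keeping a set of live "threads" (program counters) so it
# never backtracks.
CHAR, SPLIT, JMP, MATCH = range(4)

def add_thread(prog, pc, threads):
    """Follow epsilon transitions (SPLIT/JMP) and record thread pcs."""
    if pc in threads:
        return
    op = prog[pc]
    if op[0] == JMP:
        add_thread(prog, op[1], threads)
    elif op[0] == SPLIT:
        threads.add(pc)  # mark visited to avoid epsilon loops
        add_thread(prog, op[1], threads)
        add_thread(prog, op[2], threads)
    else:
        threads.add(pc)

def pike_match(prog, text):
    threads = set()
    add_thread(prog, 0, threads)
    for ch in text:
        next_threads = set()
        for pc in threads:
            op = prog[pc]
            if op[0] == CHAR and op[1] == ch:
                add_thread(prog, pc + 1, next_threads)
        threads = next_threads
    return any(prog[pc][0] == MATCH for pc in threads)

# Bytecode for the regex a+b (hand-compiled for the example):
prog = [
    (CHAR, "a"),    # 0: consume an 'a'
    (SPLIT, 0, 2),  # 1: either loop for another 'a' or continue
    (CHAR, "b"),    # 2: consume the final 'b'
    (MATCH,),       # 3: full match
]
print(pike_match(prog, "aaab"))  # True
print(pike_match(prog, "ba"))    # False
```

Because every thread consumes the same input position at each step, the running time is bounded by the program size times the input length, which is how a Pike VM sidesteps the exponential blowups of backtracking matchers.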


Google launches open-source security key project, OpenSK


FIDO is a standard for secure online access via a browser that goes beyond passwords. There are three modern flavours of it: Universal Second Factor (U2F), Universal Authentication Framework (UAF), and FIDO2. UAF handles biometric authentication, while U2F lets people authenticate themselves using hardware keys that can be plugged into a USB port or tapped on a reader; that works as an extra layer on top of your regular password. FIDO2 does away with passwords altogether, combining a hardware key with an authentication protocol called WebAuthn: the digital token on your security key logs you straight into a compatible online service. To date, Yubico and Google have both been popular providers of FIDO-compatible keys, but they’ve done so using their own proprietary hardware and software. Google hopes that by releasing an open-source version of FIDO firmware, it will accelerate broader adoption of the standard. Google has designed the OpenSK firmware to work on a Nordic dongle, which is a small uncased board with a USB connector on it.
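
The core idea underneath FIDO2/WebAuthn is public-key challenge-response: the service stores only a public key, and login proves possession of the matching private key by signing a fresh challenge. The sketch below illustrates that flow with the `cryptography` package's ECDSA primitives; it is a conceptual illustration only, not the WebAuthn wire protocol, which adds origin binding, attestation, counters, and CBOR encoding.

```python
# Conceptual sketch of FIDO2-style challenge-response authentication.
# The server stores only a public key; no shared secret or password
# ever exists. Not the real WebAuthn protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair; the server keeps
# only the public half.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the device-bound private key...
assertion = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature against the stored key.
try:
    server_stored_public_key.verify(assertion, challenge,
                                    ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")
```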


Early use of AI for finance focused on operations, analytics


Anecdotal evidence suggests AI excels at financial processes that involve repetitive operations on large volumes of data. "It will eliminate the need for people to do a lot of the boring, repetitive work that they're doing today," Kugel said. "It will make it possible for systems to wrap themselves around the habits and requirements of the user, as opposed to the user having to adapt how they work within the limitations of technology." Data quality will also improve and, with it, the quality of analytics, as AI gets better at flagging errors for people to correct, Kugel said. AI is also helping with tedious accounts payable tasks, such as confirming that goods were received and that an invoice contains the right items, Tay said. Companies that use automated payments are deploying machine learning to scan payment patterns for deviations. "If the machine learning algorithm tells them that the probability is high that the goods have been received and everything is good with that specific invoice, they'll pay it immediately," Tay said.
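
Scanning payment patterns for deviations is typically an unsupervised anomaly-detection task. The sketch below shows one common approach using scikit-learn's IsolationForest; the data and feature names are invented for the example, and a real accounts payable system would use far richer features and review workflows.

```python
# Illustrative sketch: flagging payment-pattern deviations with an
# unsupervised model. Data and features are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per invoice: [amount, days until due, line-item count].
normal = rng.normal(loc=[500, 30, 5], scale=[50, 3, 1], size=(500, 3))
suspect = np.array([[5000, 1, 40]])   # unusually large, rushed invoice
invoices = np.vstack([normal, suspect])

model = IsolationForest(contamination=0.01, random_state=0).fit(invoices)
flags = model.predict(invoices)       # -1 = deviation, 1 = normal

print("flagged invoice indices:", np.where(flags == -1)[0])
```

Invoices the model scores as normal could be routed to automatic payment, while the flagged ones go to a human reviewer, which matches the workflow described above.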


SaaS, PaaS, IaaS: The differences between each and how to pick the right one

In theory, PaaS, IaaS and SaaS are designed to do two things: cut costs and free organizations from the time and expense of purchasing equipment and hosting everything on-premises, DiDio said. "However, cloud computing services are not a panacea. Corporate enterprises can't just hand everything off to a third-party cloud provider and forget about them. There's too much at stake." Internal IT departments must remember what DiDio calls "the three Cs": communication, collaboration and cooperation, all of which she said are essential for successful business outcomes and smooth, efficient, uninterrupted daily operations. "When properly deployed and maintained, IaaS is highly flexible and scalable," DiDio said. "It's easily accessed by multiple users. And it's cost effective." IaaS is beneficial to businesses of all types and sizes, she said. "It provides complete and discretionary control over infrastructure… Many organizations find that they can slash their hardware costs by 50% or more using IaaS." However, IaaS "requires a mature operations model and rigorous security stacks, including understanding cloud provider technologies," noted Vasudevan. IaaS also "requires skill and competency in resource management."


Startup uses machine learning to support GDPR’s right to be forgotten

“Every user has over 350 companies holding sensitive data on them, which is quite shocking,” says Ringel. “Not only that, but this number is growing by eight new companies a month, which means our personal footprint is highly dynamic and changing all the time.” According to Ringel, the conversation about data privacy needs to focus much more on data ownership. “Privacy is all about putting fences around us, preventing our personal information being shared with other people,” he says. “But the problem with that is that we miss out on the fun – every day we use online services and share our data with companies because it is convenient and efficient. Now, with GDPR, we can actually take our data back whenever we choose.” Once users know where their data is, Mine helps them reclaim it by submitting automated right-to-be-forgotten requests to the companies at the click of a button. For users on the trial version of Mine, the startup will email the request to the company and copy the user in on follow-up communications.
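
The automated part of this workflow is essentially templated correspondence. As a rough illustration, here is a minimal Python sketch that generates a GDPR Article 17 erasure request and copies the user in; the template wording, names and addresses are invented and are not Mine's actual implementation.

```python
# Illustrative sketch: generating a GDPR Article 17 ("right to be
# forgotten") erasure request. All wording and addresses are invented.
from email.message import EmailMessage

def erasure_request(company, dpo_address, user, user_email):
    msg = EmailMessage()
    msg["To"] = dpo_address
    msg["Cc"] = user_email   # copy the user in for follow-up
    msg["Subject"] = "Erasure request under GDPR Article 17"
    msg.set_content(
        f"Dear {company} data protection team,\n\n"
        f"I, {user}, request the erasure of all personal data you hold\n"
        f"about me, associated with {user_email}, under Article 17 of\n"
        f"the GDPR. Please confirm completion within one month.\n"
    )
    return msg

print(erasure_request("ExampleCorp", "dpo@example.com",
                      "A. User", "a.user@example.net"))
```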


Serverless Cloud Computing Will Drive Explosive Growth In AI-Based Innovation

As cloud computing has advanced, more companies have made the transition to the cloud-based platform-as-a-service (PaaS) model, which delivers computing and software tools over the internet. PaaS can be scaled up or down as needed, which reduces up-front costs and allows you to focus on developing software applications instead of dealing with hardware-oriented tasks. To support this shift toward the PaaS cloud, public cloud companies have begun heavily investing in building or acquiring serverless components that have pre-built unit functionality. These out-of-the-box tools allow organizations to test new concepts, iterate and evaluate without taking on high risk or expense. In the past, only large companies with considerable resources could afford to experiment with AI-based innovation. Now startups or small teams within larger enterprises have access to cloud-based, prepackaged algorithms offering different AI models that can fast-track innovation. Let’s explore practical examples of how this trend helps democratize innovation in artificial intelligence by minimizing the time, money and resources needed to get started.


The Past, Present And Future Of Oracle’s Multi-Billion Dollar Cloud Bet

Larry had more confidence than I did. He was sure of it; I was more cautiously optimistic. We started running our little business on QuickBooks because we hadn’t built our system yet. When our system got to the point where we could run our own business on it, I imported our QuickBooks file and saw our business in a browser at home. I was at home looking at all the key metrics of how we were spending, and how we were growing, and who our employees were, all there in the browser. That’s when I was sure it was going to work, because I knew we were first to do that. I felt that with Larry’s strong backing we’d be able to reach a lot of companies, and that’s what happened. He was sure from the very beginning. It really was his idea to do it as a web-based application. He was the pioneer, and this was before Salesforce.com started, which he was also involved with. He wanted to do accounting, and I encouraged us to move beyond just accounting, and together we came up with the concept of the suite, and thus the name of the company, ultimately, became NetSuite.


Rogue IoT devices are putting your network at risk from hackers


Security standards for IoT devices aren't as stringent as they are for other products such as smartphones or laptops, so IoT manufacturers have been known to ship highly insecure devices – and sometimes these products never receive any sort of patch, either because the user isn't aware of how to apply it or because the company never issues one. A large number of connected devices are also easily discoverable with the aid of the IoT search engine Shodan. Not only does this leave IoT products potentially vulnerable to being compromised and roped into a botnet, but insecure IoT devices connected to corporate networks could enable attackers to use something as trivial as a fitness tracker or a smartwatch as an entry point into the network, and as a means of further compromise. "Personal IoT devices are easily discoverable by cybercriminals, presenting a weak entry point into the network and posing a serious security risk to the organisation. Without a full view of the security policies of the devices connected to their network, IT teams are fighting a losing battle to keep the ever-expanding network perimeter safe," said Malcolm Murphy, Technical Director for EMEA at Infoblox.
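
Defenders can use the same discoverability to audit their own estate. The sketch below uses the official `shodan` Python library to look for exposed devices inside a netblock you control; the API key, query, and network range are placeholders, and you should only search for networks you own.

```python
# Illustrative sketch: auditing your own address space for exposed IoT
# devices with the shodan library. Requires a Shodan API key; query and
# range are placeholders. Only scan networks you own.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder
api = shodan.Shodan(API_KEY)

# Look for webcams advertising themselves inside your own netblock
# (203.0.113.0/24 is a reserved documentation range).
results = api.search('product:"webcam" net:203.0.113.0/24')

for match in results["matches"]:
    print(match["ip_str"], match.get("port"), match.get("org"))
```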


Europe’s new API rules lay groundwork for regulating open banking


The EU and the U.K. have both passed laws that explicitly require their banks to create application programming interfaces and open those APIs to third-party developers, and banks in the U.S. should take notice. These new laws are paving the way to standardization for open banking, which could lead to rapid innovation and a competitive advantage for the European banking system. They are also friendlier to fintech companies, as they streamline access to a growing network of bank data. Fintechs within the U.S. must create individual data sharing agreements with each bank partner, and the negotiations for each partnership can be resource intensive. In the EU, however, a fintech can get access to all bank APIs by registering as an account information service provider (AISP) or payment initiation service provider (PISP). This could create a situation where the U.S. loses out on technology investments and sees innovative financial professionals leave the nation to work in the rapidly advancing open-banking environment within the EU.
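
From a developer's perspective, a registered AISP's account-information call is an ordinary authenticated HTTP request. The sketch below shows the general shape; the endpoint, token, and response fields are entirely hypothetical, since real PSD2 APIs vary by bank and additionally require eIDAS certificates, consent flows, and AISP registration.

```python
# Illustrative sketch of an AISP-style account-information call.
# Endpoint, token, and response fields are hypothetical placeholders.
import requests

BANK_API = "https://api.examplebank.eu/open-banking/v1"   # hypothetical
ACCESS_TOKEN = "token-obtained-via-oauth2-consent-flow"   # placeholder

resp = requests.get(
    f"{BANK_API}/accounts",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # Many PSD2 APIs also require request signing and a consent ID.
    },
    timeout=10,
)
resp.raise_for_status()
for account in resp.json().get("accounts", []):
    print(account)
```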



Quote for the day:


"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr