Daily Tech Digest - September 04, 2020

Blockchain for Master Data Management

What is the relevance of Blockchain for MDM? Blockchain is a type of database – though quite different from traditional relational or emerging NoSQL databases. As highlighted in the podcast, Blockchain is a linked list of cryptographically secured blocks of transactions that are immutable. Participants who do not know or trust each other can rely on and trust the Blockchain. Unlike traditional databases that support CRUD (Create, Read, Update, and Delete), with Blockchain you can only Create and Read: transactions are validated and added to the blocks in the chain. They can be read but never deleted or updated. All transactions and activities on the Blockchain are timestamped. So, what is the relevance of Blockchain for MDM when we cross organizational boundaries? Conducting business transactions across organizational boundaries has all the challenges of intra-enterprise silos and adds several others. Inter-enterprise exchanges and data sharing are marred by multiple inefficiencies: manual forms and paperwork, error-prone replications, delays due to organizational or bureaucratic inefficiencies, errors in language translation (especially in cross-country exchanges), and difficulties in reconciling governance policies – to name a few.
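
The append-only, hash-linked structure described above can be sketched in a few lines of Python. This is a minimal illustration of the Create/Read-only model, not any production ledger; the block fields and transaction names are invented:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents (everything except the stored hash itself)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Create a timestamped block linked to its predecessor (Create)."""
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    """Read-only check: any update or delete breaks a stored hash or link."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["customer-record-created"], prev_hash="0" * 64)
chain = [genesis, make_block(["customer-address-updated"], genesis["hash"])]
```

Because each block's hash covers its contents and the previous block's hash, "updating" any past record invalidates every block after it, which is why the chain supports only Create and Read.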


Everything you need to know about the weird future of quantum networks

QKD technology is in its very early stages. The "usual" way to do QKD at the moment consists of sending qubits one way to the receiver, through fibre-optic cables; but those significantly limit the effectiveness of the protocol. Qubits can easily get lost or scattered in a fibre-optic cable, which means that quantum signals are very much error-prone and struggle to travel long distances. Current experiments, in fact, are limited to a range of hundreds of kilometers. There is another solution, and it is the one that underpins the quantum internet: to leverage another quantum property, called entanglement, to communicate between two devices. When two qubits interact and become entangled, they share particular properties that depend on each other. While the qubits are in an entangled state, any change to one particle in the pair will result in changes to the other, even if they are physically separated. The state of the first qubit, therefore, can be "read" by looking at the behavior of its entangled counterpart. That's right: even Albert Einstein called the whole thing "spooky action at a distance". And in the context of quantum communication, entanglement could, in effect, teleport some information from one qubit to its entangled other half, without the need for a physical channel bridging the two during the transmission.
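
A toy classical simulation can convey the always-agreeing measurement results (though not the genuinely quantum parts, such as Bell-inequality violations). The sketch below models measuring both halves of the Bell state (|00> + |11>)/sqrt(2) in the same basis: each outcome is individually random, yet the two halves always match.

```python
import random

def measure_bell_pair():
    """Toy model: the pair (|00> + |11>)/sqrt(2) collapses together, so
    each measurement outcome is random, yet the two halves always agree."""
    outcome = random.choice([0, 1])
    return outcome, outcome  # Alice's result, Bob's result

results = [measure_bell_pair() for _ in range(1000)]
agreement = all(a == b for a, b in results)
```

Reading one half immediately tells you the other, with no physical channel needed between them at measurement time.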


Cyber security Career Guidance — Part 1 — the Beginner’s Journey

Logs can seem overwhelming the first time you come across them. But all you must do is confront the bully head-on! In my training workshops, I always throw different log file formats on the screen and ask the students to analyze what’s going on. At first, there’s a typical sigh across the whole class, but soon people begin to interpret the different fields and what they could mean. There are numerous tools out there — some that support multiple log formats, others that do a great job on a specific log format. With experience, you will figure out which tool works best for which type of log format, but nothing beats being able to look at raw logs and not be intimidated. ... While it is not mandatory that you know a programming language, it helps a lot. During the interview process, unless it is mentioned on your resume, I would not ask about your programming know-how. But from personal experience, I can vouch for the power of programming when solving real-world technical issues. Again, which language you know is not important. Even C is fine. Shell scripting is possibly even better. Python is awesome. In college, we were taught Basic and C. We taught ourselves C++ and Java on the side.
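
As a small exercise in not being intimidated by raw logs, here is a Python sketch that pulls the fields out of a made-up Apache/Nginx "combined"-style access-log line; the regex covers only the leading fields:

```python
import re

# Hypothetical access-log line in the common "combined" format.
LINE = ('203.0.113.7 - - [04/Sep/2020:10:12:01 +0000] '
        '"GET /admin/login.php HTTP/1.1" 404 512 "-" "curl/7.68.0"')

PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\d+)')

def parse(line):
    """Split a raw log line into named fields."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

fields = parse(LINE)
```

Once the line is broken into named fields, questions like "who is probing /admin pages and getting 404s?" become simple filters instead of squinting at text.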


How Google Maps uses DeepMind’s AI tools to predict your arrival time

Google Maps is one of the company’s most widely used products, and its ability to predict upcoming traffic jams makes it indispensable for many drivers. Each day, says Google, more than 1 billion kilometers of road are driven with the app’s help. But, as the search giant explains in a blog post today, its features have become more accurate thanks to machine learning tools from DeepMind, the London-based AI lab owned by Google’s parent company Alphabet. In the blog post, Google and DeepMind researchers explain how they take data from various sources and feed it into machine learning models to predict traffic flows. This data includes live traffic information collected anonymously from Android devices, historical traffic data, information like speed limits and construction sites from local governments, and also factors like the quality, size, and direction of any given road. So, in Google’s estimates, paved roads beat unpaved ones, and the algorithm will sometimes decide it’s faster to take a longer stretch of motorway than to navigate multiple winding streets.
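
The routing idea in the excerpt (optimize predicted travel time, not distance) can be sketched with a tiny Dijkstra search. The road graph, lengths, and predicted speeds below are entirely made up and merely stand in for what the ML models would predict from live and historical traffic:

```python
import heapq

# Toy road graph: each edge is (neighbor, length_km, predicted_speed_kmh).
ROADS = {
    "A": [("B", 2.0, 15.0), ("M1", 5.0, 90.0)],  # winding street vs motorway ramp
    "B": [("C", 2.0, 15.0)],
    "M1": [("C", 4.0, 90.0)],
    "C": [],
}

def fastest_route(graph, start, goal):
    """Dijkstra on travel time (length / predicted speed), not distance."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == goal:
            return t, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, km, kmh in graph[node]:
            heapq.heappush(queue, (t + km / kmh, nxt, path + [nxt]))
    return float("inf"), []

time_h, route = fastest_route(ROADS, "A", "C")
```

Here the 9 km motorway detour beats the 4 km of winding streets, which is exactly the kind of "longer but faster" decision the article describes.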


How to Build a Strong Beta Testers Community

Before you start, you should define your goal and target audience. Defining goals is the first task to complete. Here are a few relevant ones: test an idea and gather feedback to make sure you are solving the right problem; test the sketches to make sure you solve the problem right; and test an early version to get feedback and adjust the solution before the official launch. Don’t forget to describe how you will know that you have achieved your goal. For example, if you want to get feedback regarding your product, that’s great. But what if only one user provides their feedback? Does it mean that you have achieved your goal? Make sure you can measure the results so that you can tell whether you have achieved your goal. And as with any other goal, don’t forget to revise it during your beta program. You may want to adjust it as you go. How much time do you have to dedicate to the beta program? If you do everything manually, then you need to set a maximum number of participants. Think about how many contacts (customers) you can serve during the beta. Your beta customers will ask questions, provide feedback, and log bugs.


How to judge open-source projects

An easier way to determine an open-source program's quality is simply to look at the number and quality of its developers. Mike Volpi, a well-known venture capitalist and Index Ventures partner, said that since "software is never sold," it is adopted by developers who appreciate the software more because they can see it and use it themselves, rather than being subject to it based on executive decisions. Therefore, "open-source software permeates itself through the true experts," and . . . "the developers . . . vote with their feet." If the programmers are leaving, the maintainers aren't responding to patch requests, and the code is growing moldy, it's time to bid that program good-bye. Or, if it's essential to you, take it over yourself. You can also determine a project's health by how easy -- or not -- it makes it for others to participate in it. Ed Warnicke, a Cisco Distinguished Consulting Engineer, believes successful open-source communities lower the barriers to useful participation. He lists many barriers to participation, which are red flags. ... Another way of judging open-source projects is how many people actually use them.
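
The warning signs above can even be turned into a crude checklist. The thresholds below are arbitrary illustrations of the heuristics named in the excerpt, not industry standards:

```python
from datetime import date

def health_flags(last_commit, oldest_unanswered_pr_days, active_maintainers,
                 today=date(2020, 9, 4)):
    """Crude red-flag checklist for an open-source dependency."""
    flags = []
    if (today - last_commit).days > 180:
        flags.append("code going moldy")
    if oldest_unanswered_pr_days > 60:
        flags.append("maintainers not responding to patches")
    if active_maintainers < 2:
        flags.append("bus factor of one")
    return flags
```

A project tripping several of these flags is a candidate for replacement, or, if it is essential to you, for taking over yourself.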


Which cybersecurity failures cost companies the most and which defenses have the highest ROI?

SCRAM (Secure Cyber Risk Aggregation and Measurement) has, according to its creators, solved that longstanding cyber-security problem. “SCRAM mimics the traditional aggregation technique, but works exclusively on encrypted data that it cannot see. The system takes in encrypted data from the participants, runs a blind computation on it, and returns an encrypted result that must be unlocked by each participant separately before anyone can see the answer,” they explained. “The security of the system comes from the requirement that the keys from all the participants are needed in order to unlock any of the data. Participants guarantee their own security by agreeing to unlock only the result using their privately held key.” More technical details about the process and the platform, which consists of a central server, software clients, and a communication network to pass encrypted data between the clients and the server, can be found in this paper. ... The researchers recruited seven large companies that had a high level of security sophistication and a CISO to test out the platform, i.e., to contribute encrypted information about their network defenses and a list of all monetary losses from cyber attacks and their associated defensive failures over a two-year period.
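
The "no one sees the raw data" idea can be illustrated with the simplest blind-aggregation scheme, additive secret sharing. To be clear, this is a toy stand-in for the blind computation described, not SCRAM's actual cryptography; the loss figures are invented:

```python
import random

P = 2**61 - 1  # large prime modulus for additive secret sharing

def share(value, n):
    """Split a private figure into n random shares that sum to the value
    mod P; any subset of n-1 shares reveals nothing about the value."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three firms each report a private loss; only the total is recoverable.
losses = [1_200_000, 350_000, 4_700_000]
all_shares = [share(v, 3) for v in losses]
# Each "column" of shares is summed without any party seeing raw values.
aggregate = sum(sum(col) for col in zip(*all_shares)) % P
```

The aggregator learns only the industry-wide total, which mirrors the requirement that no participant's individual losses are ever exposed.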


Open Service Mesh: a Service Mesh Implementation from Microsoft

Microsoft has released (in alpha) Open Service Mesh (OSM), a service mesh implementation compliant with the SMI specification. OSM covers standard service mesh features like canary releases, secure communication, and application insights, similar to other implementations such as Istio, Linkerd, Consul, or Kuma. Additionally, the OSM team is in the process of donating the project to the CNCF. OSM implements the service mesh interface (SMI), a set of standard, portable APIs for deploying a service mesh in Kubernetes. When users configure a service mesh through the SMI specification, they don't need to know which service mesh implementation is running in the cluster. In this alpha release, OSM comes with the ability to configure traffic-shifting policies, secure communication between services through mTLS, fine-grained access control policies, application metrics, external certificate managers, and automatic injection of the Envoy sidecar proxy.
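
For illustration, here is a sketch of the kind of SMI resource a compliant mesh consumes: a TrafficSplit shifting a small share of traffic to a canary version. The service names and weights are invented for this example:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-canary
spec:
  service: bookstore          # the root service clients call
  backends:
  - service: bookstore-v1     # current version keeps most traffic
    weight: 90
  - service: bookstore-v2     # canary receives a small share
    weight: 10
```

Because the resource is part of the portable SMI API, the same manifest should work whether OSM or another SMI-compliant mesh is running in the cluster.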


The Hidden Costs of Losing Security Talent

Ryan Corey, co-founder and CEO of online training site Cybrary, says companies also lose money on staffing when they don't chart a clear career path for their employees. "Every cyber professional has recruiters calling them all the time. That's just the way it is because there are not enough people to fill the available jobs," he says. "When people feel boxed in, they will leave. They have to know what the path is to the next level." Another issue: Companies don't handle diversity well, adds Ron Gula, a board member at Cybrary. "By diversity I mean diversity in employment backgrounds," he says. "Companies may want to hire a pen tester because they have security experience, but they should also be looking for people who have experience in accounting, a legal department, or other types of jobs." Finally, companies don't fund cyber departments well enough, either, Gula says. "Too often there's a lack of leadership, funding, and a vision for what the department could be," he says. "Sometimes they outsource and have a bad experience and then move forward with a skeleton crew." CyberVista's Petrella says she works with companies on developing their recruiting and retention strategies, as well as how to upskill the people they recruit.


Businesses, policymakers ‘misaligned’ on what ethical AI really means

Policymakers rated “fairness and avoiding bias”, such as the misidentification of individuals, as the top priority for this application of the technology, followed by “privacy and data rights” and “transparency.” Among private firms, however, the number one concern was different: these companies identified “privacy and data rights” as their top worry. While this is just one example, experts from EY have remarked that the substantial misalignment in points of view between the public and private sectors poses a huge risk to the business landscape, because the two sides lack a shared, focused approach to ethical AI. Policymakers and firms need to collaborate on truly defining ethical AI and work together to narrow the existing gap. EY global markets digital and business disruption leader Gil Forer said: “As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks.” Forer continued: “Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic.”



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - September 03, 2020

What is an office for now?

Working from home does work for a lot of people; I’ve been working from home since way before it was cool. But it can be terrible — isolating and uncomfortable, with blurred boundaries that make it too easy to keep working well past “office hours” but equally too easy to drift away from your desk to load the dishwasher. One survey on working from home, conducted by the Institute for Employment Studies in the U.K. early in its lockdown, found that more than half of respondents reported new musculoskeletal complaints, including neck and back pain, while their diet and exercise suffered. Many of them said they slept less and worried more. ... Additionally, asking employees to turn their home into an office makes employers more responsible for what happens there, while simultaneously making it more difficult to assess worker well-being. “I’ve spent a lot of my time making sure that people are OK in a way that you can do very, very swiftly in the office,” Sam Bompas, director at Bompas & Parr, a London-based experience design studio with approximately 20 employees, told me. “In the same way that for children, school provides an important social security function, if there’s anything wrong in [employees’] personal life, the office can do that as well.”


Most IoT Hardware Dangerously Easy to Crack

One of the easiest methods is to gain access to UART, or Universal Asynchronous Receiver/Transmitter, a serial interface used for diagnostic reporting and debugging in all IoT products, among other things. An attacker can use the UART to gain root shell access to an IoT device and then download the firmware to learn its secrets and inspect for weaknesses. "UART is only supposed to be used by the manufacturer. When you get access to it, in most cases you get complete root access," Rogers said. Protecting access to UART, or at least configuring it against interactive access, should be a fairly straightforward task for manufacturers; however, most don't make the effort. "They simply allow you to have complete interactive shell. It is the easiest way to hack every piece of IoT hardware," Rogers noted. Several devices even have UART pin names labeled on the board so it is easy to find the interface. Multiple tools are available to help find them if they are not labeled. Another, only slightly more challenging, route to completely pwning an IoT device is via JTAG, a microcontroller-level interface that is used for multiple purposes including testing integrated circuits and programming flash memory. 


Principles for Microservice Design: Think IDEALS, Rather than SOLID

The goal of interface segregation for microservices is that each type of frontend sees the service contract that best suits its needs. For example: a mobile native app wants to call endpoints that respond with a short JSON representation of the data; the same system has a web application that uses the full JSON representation; there’s also an old desktop application that calls the same service and requires a full representation, but in XML. Different clients may also use different protocols. For example, external clients want to use HTTP to call a gRPC service. Instead of trying to impose the same service contract (using canonical models) on all types of service clients, we "segregate the interface" so that each type of client sees the service interface that it needs. How do we do that? A prominent option is to use an API gateway. It can do message format transformation, message structure transformation, protocol bridging, message routing, and much more. Another popular option is the Backend for Frontends (BFF) pattern. In this case, we have an API gateway for each type of client -- we commonly say we have a different BFF for each client, as illustrated in this figure.
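
Interface segregation here just means per-client views over one canonical record. A minimal sketch of what each BFF might return (the record, field names, and clients are invented):

```python
# One canonical record held by the backing service.
FULL_RECORD = {
    "id": 42,
    "name": "Widget",
    "description": "A very long description...",
    "price": 9.99,
    "audit": {"created": "2020-09-03", "updated": "2020-09-04"},
}

def mobile_view(record):
    """Short JSON for the native app: only what a small screen needs."""
    return {k: record[k] for k in ("id", "name", "price")}

def desktop_xml_view(record):
    """Full representation, rendered as XML for the legacy desktop client."""
    fields = "".join(f"<{k}>{record[k]}</{k}>"
                     for k in ("id", "name", "description", "price"))
    return f"<product>{fields}</product>"
```

Each function plays the role of one BFF: the backing service keeps a single canonical model, while every client type gets the contract it actually wants.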


Ethical and professional data science needed to avoid further algorithm controversies

Identifying weaknesses in the attempts to ensure objectivity, the BCS report also said there is a need for clarity around what information systems are intended to achieve at the individual level, and that this should be established “right at the start” of the process. For example, distributing grades based on the characteristics of different cohorts of students so they are statistically in line with previous years – which is what the Ofqual algorithm did – is different to ensuring each individual student is treated as fairly as possible, something which should have been discussed and understood by all stakeholders from the beginning, it said. In terms of accountability, BCS said: “It is essential to develop effective mechanisms for the joint governance of the design and development of information systems right at the start.” Although it refrained from apportioning blame, it added: “The current exam-grading situation should not be attributed to any single government department or office.” CEO of the RSS, Stian Westlake, however, told Sky News the results fiasco was “a predictable surprise” because of DfE’s demand that Ofqual reduce grade inflation.


Why you shouldn’t mistake AI for automation

AI and automation should not be mistaken for the same thing—where there’s automation, there is no requirement that artificial intelligence is involved. Indeed, automation has been around for centuries, far longer than we’ve had computers: traditional milling used water wheels to automate manual processes that human labor would otherwise have been required for. Water spins the wheel, which turns the millstone—an automated process that’s decidedly unintelligent. Simple automation has been the cornerstone of many businesses for years. For example, a process of sending out invoices may be automated once inputs into spreadsheets have been confirmed by people in the accounts department. Automation means that machines are replicating human tasks. But AI demands that the machines also replicate human thinking. This means programming that can reflect on its own procedures and make decisions beyond the scope of its original programming. Ultimately, machine learning requires a machine to react dynamically to changing variables. This is a fundamentally different objective from automation, which is essentially about teaching machines to perform repetitive tasks with predictable inputs. For this reason, applying machine learning to any automated process may be a case of overengineering.
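
The contrast can be made concrete. The first function below is plain automation, a fixed rule over predictable inputs like the invoicing example; the second is a deliberately trivial learner whose output shifts as new data arrives. Both are illustrative toys:

```python
def automated_invoice_total(line_items, tax_rate=0.2):
    """Plain automation: the same fixed rule applied to every input."""
    return sum(line_items) * (1 + tax_rate)

class OnlineMeanEstimator:
    """A trivial learner: its prediction changes as observations arrive,
    which no fixed rule like the function above can do."""
    def __init__(self):
        self.n, self.mean = 0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def predict(self):
        return self.mean
```

The automation never deviates from its rule; the estimator reacts dynamically to changing variables, which is the dividing line the excerpt draws.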


Convert PDFs to Audiobooks with Machine Learning

When you look at a research paper, it’s probably easy for you to gloss over the irrelevant bits just by noting the layout: titles are large and bolded; captions are small; body text is medium-sized and centered on the page. Using spatial information about the layout of the text on the page, we can train a machine learning model to do that, too. We show the model a bunch of examples of body text, header text, and so on, and hopefully it learns to recognize them. This is the approach that Kaz, the original author of this project, took when trying to turn textbooks into audiobooks. Earlier in this post, I mentioned that the Google Cloud Vision API returns not just text on the page, but also its layout. ... The book Kaz was converting was, obviously, in Japanese. For each chunk of text, he created a set of features to describe it: how many characters were in the chunk of text? How large was it, and where was it located on the page? What was the aspect ratio of the box enclosing the text (a narrow box, for example, might just be a side bar)? Notice there’s also a column named “label” in that spreadsheet above. That’s because, in order to train a machine learning model, we need a labeled training dataset from which the model can “learn.” 
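
A sketch of the kind of feature vector such a layout classifier might consume, derived from a chunk of text plus its bounding box. The feature names, normalization, and example title are invented for illustration; the real pipeline used the Vision API's layout output:

```python
def layout_features(text, x, y, width, height, page_width, page_height):
    """Features for telling body text from titles, captions, and sidebars."""
    return {
        "num_chars": len(text),
        "aspect_ratio": width / height,  # very narrow boxes may be sidebars
        "relative_area": (width * height) / (page_width * page_height),
        "center_x": (x + width / 2) / page_width,   # titles tend to be centered
        "center_y": (y + height / 2) / page_height, # captions sit low on the page
    }

# A large, horizontally centered chunk near the top of the page: title-like.
title = layout_features("Attention Is All You Need", 150, 60, 300, 40, 600, 800)
```

Paired with a "label" column, rows like this become exactly the labeled training dataset the excerpt says the model needs to learn from.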


Zero-trust framework ripe for modern security challenges

Adopting a zero-trust security model is not an overnight process. "Younger companies with advanced architectures and less legacy equipment have an advantage since they are already utilizing new technology and are up to speed on new technology," said Pete Lindstrom, vice president of security research with IDC's IT Executive Program. Legacy infrastructure is an obstacle companies face when trying to shift to a zero-trust approach. A common yet misguided course of action is to conduct a massive overhaul of security infrastructure. "Companies often make the mistake of trying to boil the ocean and go way too broad in scope," Cunningham said. "They should focus in on granular things they can achieve one at a time, like enabling multifactor authentication, remote access control and disabling file shares." Since zero-trust security is a hot buzzword, businesses should be wary in terms of how they evaluate potential vendors since many like to pitch their products as zero trust when they really aren't. "Rule No. 1: Companies should make sure the vendor is using zero trust [in its own network] so they are buying something from someone who understands their pains," Cunningham said.


.NET CLI Templates in Visual Studio

One of the values of using tools for development is the productivity they provide in helping start projects, bootstrapping dependencies, etc. One way that we’ve seen developers and companies deliver these bootstrapping efforts is via templates. Templates serve as a useful tool to start projects and add items to existing projects for .NET developers. Visual Studio has had templates for a long time, and .NET Core’s command-line interface (CLI) has also had the ability to install templates and use them via `dotnet new` commands. However, if you were the author of a template and wanted to have it available in the CLI as well as Visual Studio, you had to do extra work on the set of manifest files and installers to make it visible in both places. We’ve seen template authors gravitate toward making one experience work better, which sometimes leaves the other without visibility. We wanted to change that. Starting in Visual Studio 16.8 Preview 2, we’ve enabled a preview feature that you can turn on so that all templates installed via the CLI now show as options in Visual Studio as well.


How to predict new consumer behaviour in the Covid-19 era

Keeping tabs on what consumers are buying is the easiest way to get your data – predicting which products will grow and which won’t is where the gold is. While some product changes will be obvious — it’s unsurprising that purchase of medical supplies and non-perishable foodstuffs has increased — a 652% rise in the purchase of bread machines suggests that we don’t quite have the skills of Paul Hollywood just yet. There is also insight to be had in observing the products which have decreased in popularity over lockdown. Camera sales fell by 64% over the previous four months. As social events such as holidays, birthdays and weddings were cancelled, so was the need to bag a new ‘social accessory’ for the occasion. Think about how your product suite fits around these trends and whether they are short-term reactions or long-term shifts in behaviour. Can you scale back on a certain line of products or diversify your range to meet a new product demand? A shift to working — and playing — from home has driven significant demand for new purchases. With 43% of adults now working from home, companies that can help transform our homes into multipurpose activity hubs are rising in popularity.


How to make complicated machine learning developer problems easier to solve

Many of the difficulties in building efficient AI companies happen when facing long-tailed distributions of data. ... It's becoming clear that long-tailed distributions are also extremely common in machine learning, reflecting the state of the real world and typical data collection practices. ... Current ML techniques are not well equipped to handle [long-tail distributions of data]. Supervised learning models tend to perform well on common inputs (i.e. the head of the distribution) but struggle where examples are sparse (the tail). Since the tail often makes up the majority of all inputs, ML developers end up in a loop--seemingly infinite, at times--collecting new data and retraining to account for edge cases. And ignoring the tail can be equally painful, resulting in missed customer opportunities, poor economics, and/or frustrated users. Unfortunately, the answer isn't to throw more computational horsepower or data at the problem. The very problem of disparate data across diverse customer inputs contributes to diseconomies of scale, whereby it may cost 10X more (in terms of data, infrastructure, and more) to generate a 2X improvement.
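
The head/tail imbalance is easy to quantify with a Zipf model of input frequencies (the exponent and sizes here are arbitrary illustrations): a head of 0.1% of distinct inputs carries outsized probability mass, yet the long tail still holds the majority.

```python
def zipf_shares(n_items, head_items, s=1.0):
    """Probability mass held by the most common `head_items` inputs
    versus the long tail, under a Zipf(s) frequency distribution."""
    weights = [1 / rank ** s for rank in range(1, n_items + 1)]
    total = sum(weights)
    head = sum(weights[:head_items]) / total
    return head, 1 - head

# 100 "head" inputs out of 100,000 distinct ones.
head_share, tail_share = zipf_shares(100_000, 100)
```

So a model that nails only the 100 most common inputs can still miss more than half of real traffic, which is exactly the collect-and-retrain loop the excerpt describes.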



Quote for the day:

“Our greatest glory is not in never failing, but in rising up every time we fail.” -- Ralph Waldo Emerson 

Daily Tech Digest - September 02, 2020

Building a viable IT budget for 2021 in a time of uncertainty: Seven critical steps

In 2021, IT budget spends will be diversified over a broader range of categories (digitalization, mobile computing, employee training, for example) than in 2020, when IT budgets were heavily invested in security and cloud services. Security and cloud services will still lead investment categories, but organizations have reached an inflection point and feel they have attained many of their initial goals in these areas. End users will continue to be engaged in technology decision making. However, there are indications that more organizations want to fully understand just how much they spend on IT across the company. From a budgetary standpoint, this has sparked a movement to consolidate more of the IT spend (and assets) under a single umbrella, with IT in charge. Also in 2021, CFOs and other technology budget decision-makers will expect more input from successful trials and proofs of concept before they agree to fund new technology. This is in response to the mixed performance of ROI formulas, and also to cost overruns, which have routinely occurred with cloud services. That's not all. Below are seven additional budget forecasts that IT budget planners should take into account before building a 2021 IT budget.


Improvements in native code interop in .NET 5.0

With .NET 5 scheduled to be released later this year, we thought it would be a good time to discuss some of the interop updates that went into the release and point out some items we are considering for the future. As we start thinking about what comes next, we are looking for developers and consumers of any interop solutions to discuss their experiences. We are looking for feedback about interop scenarios in general – not just those related to .NET. If you have worked in the interop space, we’d love to hear from you on our GitHub issue. Some items mentioned in this post are Windows-specific (COM and WinRT). In those cases, ‘the runtime’ refers only to CoreCLR. ... C# function pointers will be coming to C# 9.0, enabling the declaration of function pointers to both managed and unmanaged functions. Some work went into the runtime to support and complement the interop-related parts of the feature. ... C# function pointers provide a performant way to call native functions from C#. It makes sense for the runtime to provide a symmetrical solution for calling managed functions from native code. UnmanagedCallersOnlyAttribute indicates that a function will be called only from native code, allowing the runtime to reduce the cost of calling the managed function.


Ducati Motors to leverage IT transformation from Aruba and Lenovo

“Using the latest and most advanced technologies is part of Ducati’s DNA,” said Konstantin Kostenarov, chief technology officer at Ducati. “Relying on the best technologies made available through our partners has significantly contributed to the overall improvement of processes, while at the same time increasing the value of the results achieved. “The choices made two years ago and the projects that have been carried out since then have allowed us to tackle the various complexities of this sport in the most effective way possible.” Giorgio Girelli, general manager of Aruba Enterprise, commented: “Among the technologies that have emerged as a result of Covid-19, the cloud is undoubtedly one that has proven its worth and made it possible to better face crisis situations. “An internal commissioned survey reveals that 59% of those who were able to use cloud solutions during emergency situations considered its use to be fundamental to their operations. “The sharing and combination of the latest technologies between the three companies involved has given life to a very innovative project focused on one goal: obtaining maximum performance.”


Leveraging AI to Deliver a Personalized Experience in the New Normal

It is key to understand how different subscribers perceive different experiences while gaming, attending a smart venue or traveling virtually. Each of these experiences will vary for different individuals: e.g. a man in his 30s who works from home versus a teenager who moves around the city. These experiences need to be predicted across various touch points, such as OTT game apps or smart venues, the network, call center, retail, and billing. It is also crucial to proactively identify anomalies and the factors contributing to a negative or positive experience, in order to act fast to resolve issues before they impact gaming customers, or to target the right customers at the optimal time for an add-on purchase in a smart venue. The application of AI and ML brings intelligent insights that are more precise than those produced by existing processes and systems, and enables the CSP to predict changes or anomalies in their customers’ experiences. AI and ML make it possible to look at each subscriber based on their individual profile, including demographics, device used or mobility, to predict the experience more accurately while taking into account individual sensitivities, biases and expectations. The insights software learns as dynamics change, whether in the CSP’s network, a customer segment or the market, and adapts its predictions accordingly.
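
As a stand-in for the anomaly detection described (the real systems are far more sophisticated), here is a simple z-score check over per-subscriber experience scores; the scores and threshold are invented:

```python
def zscore_anomalies(scores, threshold=3.0):
    """Flag readings that deviate sharply from the population norm."""
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((x - mean) ** 2 for x in scores) / n
    std = variance ** 0.5
    return [i for i, x in enumerate(scores)
            if std > 0 and abs(x - mean) / std > threshold]
```

Flagging the outlier subscriber before they call to complain is the "act fast to resolve issues" step the excerpt describes; a production system would condition on individual profiles rather than one global mean.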


To build responsibly, tech needs to do more than just hire chief ethics officers

Just like the early days of digital, ethics can seem complex and remote. Remember thinking, “The internet will never be big enough to disrupt my industry”? It can be tempting to assume you need a Ph.D. to debate complex topics like algorithmic bias or exclusion, especially as many of those chief ethics officers have those deep credentials and expertise. Even though tech fancies itself as an industry that welcomes new types of talent and thinking, credentialism is more a part of the industry culture than we think – or admit. (If you’re questioning that, just think about how popular it is to put ex-employers in your Twitter biography.) Unless you work on ethics full time or you’re a product VP, it’s easy to feel that you have no say or no role in your company’s commitment to social responsibility, especially if you’re underrepresented at your company or speaking up puts you at risk. Ethical leaders play a powerful central role in coordinating, setting standards and creating incentives, but they wouldn’t want to be the only ones to own this work, either. Responsibility’s a muscle we build and practice. Doing the right thing isn’t a one-off action, but a commitment to values that inform day-to-day behaviors and decisions. So we need to create structures that ensure company values are embedded in roles across the board.


What Is Resilience Engineering?

Resilience engineering today isn’t thought of as a function. However, just as DevOps was a description of culture before it was a role, and site reliability was an extension of operations before it was a focus, I wouldn’t be surprised if resilience engineering became a function in the near future. The first question most will ask, however, is, “Isn’t this just SRE?” The purpose of the term is to change the focus from simply reacting to incidents to developing long-term response strategies for them. Because the expectation in these environments is that things will break, resilience is the responsibility of existing DevOps and cloud operations teams. When applications and services do break, a “fly by the seat of your pants” response strategy will not work. Resilience engineering, while rooted in engineering practices, is largely focused on building strategies and a framework for their execution. That leaves the process of building resilience into a system largely unestablished, in part because each system is unique. And how you respond to issues in that system will likely be unique, even if the management plane that reports issues is not. ... For most, the best part of resilience engineering is taking what is learned from previous incidents and finding ways to automate future resolution.


Sustainability Through a Better 5G

Ericsson talks about ‘breaking the energy curve’ by providing products and solutions that simply use less energy and are the practical choice for companies striving to make a sustainable shift in their digital transformation journey. Swapping out old radio equipment for 5G-ready Ericsson Radio System equipment nationwide enables service providers to serve 5G use cases with a single software upgrade, and can also save them up to 30 percent on their energy consumption. For some operators these savings equate to paying back the investment made in modernization within just three years – who says sustainability does not go together with business goals? Looking to the future of work and travel post-coronavirus, it’s clear that our global mindset has shifted and that we can’t just go back to the way things were before. It’s all about connectivity, especially during these challenging times when keeping in touch with loved ones, essential services and businesses is more important than ever. The next era will witness technology not only serving our need to stay connected but also enabling a more inclusive and sustainable world. With a focus on real-time data built upon a framework of sustainability, Ericsson has successfully architected a 5G-aware traffic management solution with AI embedded in its RAN Compute software.


Working from home: The 12 new rules for getting it right

Remote working doesn't change some elements of corporate professionalism. "Don't expect that colleagues, clients, and managers should always be easygoing in terms of dress code, tone of voice and punctuality in the remote workplace," Herman Tse, professor in the department of management at Monash Business School, tells ZDNet. And although there is now a screen separating you from your colleagues, don't take this as an opportunity to discreetly check emails or scroll Twitter during a video call, because others can tell when you are multi-tasking, even virtually. You wouldn't check your phone in front of a co-worker giving an in-person presentation – so there is no reason to act differently online. With 30-minute slots being the default option when setting up a calendar meeting, calls that could take a couple of minutes now last much longer than necessary. "There is work that needs to be done around calendar norms," Sowmyanarayan adds. "Things that take two minutes should take two minutes." Before setting up a day full of half-hour meetings, therefore, remember how long those chats would have taken in an office. More often than not, you will find that a shorter call is far more appropriate.


App Trimming in .NET 5

Trimming sounds great, but as with most good things, there is a catch. Trimming performs a static analysis of the code and can therefore only identify types and members that are referenced from code. However, .NET offers a great deal of dynamism, typically relying on reflection. For example, Dependency Injection in ASP.NET Core uses reflection to select appropriate constructors. This is largely invisible to the static analysis, so the trimmer either needs to be told about the required types or must be able to detect common dynamism patterns; otherwise it will trim away code the application needs, resulting in runtime crashes. ... .NET 5 can take it two levels further and remove types and members that are not used. This can have a big effect where only a small subset of an assembly is used – for example, the console application above. Member-level trimming carries more risk than assembly-level trimming, and so is being released as an experimental feature, one that is not yet ready for mainstream adoption. With assembly-level trimming, it's obvious when a required assembly is missing; with member-level trimming, you need exhaustive testing of the app to ensure that nothing required has been trimmed away.
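As a sketch of how these trimming options are switched on, the properties below are the actual .NET 5 MSBuild switches (`PublishTrimmed` enables assembly-level trimming on publish; `TrimMode` set to `link` opts in to the experimental member-level mode); the surrounding project layout is an illustrative assumption about a typical console app:

```xml
<!-- MyApp.csproj: opt in to trimming for a self-contained publish -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <!-- Assembly-level trimming: drop whole unused assemblies -->
    <PublishTrimmed>true</PublishTrimmed>
    <!-- Experimental member-level trimming described above -->
    <TrimMode>link</TrimMode>
  </PropertyGroup>
</Project>
```

Trimming only applies to self-contained publishes, e.g. `dotnet publish -r linux-x64 -c Release`. For reflection-heavy code, .NET 5 also provides annotations such as the `DynamicDependency` attribute to tell the trimmer about members reached only dynamically.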


Q&A: CTO tips on delivering cloud innovation to avoid disruption

Make sure to develop and leverage an internal requirements matrix of what you are looking for. Be very clear about what you want and need from a particular cloud solution. Stack-rank key priorities and progressively implement towards the long-term vision. Ask any vendor: How are things audited? Do they comply with privacy regulations such as GDPR? What technical support do they offer? Get a full picture of what the vendor is committing to. Deployments that are measured in quarters are too slow; companies need to think about how they can take advantage of the speed and control of cloud deployments and use an agile approach to transform incrementally. An important element to consider is the vendor's application adoption rate and the holistic usability of any cloud applications. One of the most important things is usability and adaptability: will this be easily adaptable to fit your company's needs? Look at the vendor's roadmap and past innovations to gauge their ability to keep innovating and to adapt to the changing needs of your company. Start a dialogue with vendors about how you need to demonstrate results quickly.
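An internal requirements matrix with stack-ranked priorities can be sketched as a simple weighted scoring table. The criteria, weights and ratings below are illustrative assumptions, not from the article:

```python
# Hypothetical weighted vendor-scoring matrix: weights encode the
# stack-ranked priorities; ratings are 1-5 scores from an evaluation.
weights = {
    "gdpr_compliance": 5,   # must-have: privacy regulation support
    "audit_trail": 4,       # how are things audited?
    "technical_support": 3,
    "usability": 4,
    "roadmap_fit": 2,
}

def score_vendor(ratings: dict) -> float:
    """Weighted average on a 1-5 scale; higher means a better fit."""
    total = sum(weights[c] * ratings[c] for c in weights)
    return total / sum(weights.values())

vendor_a = {"gdpr_compliance": 5, "audit_trail": 4, "technical_support": 3,
            "usability": 4, "roadmap_fit": 2}
vendor_b = {"gdpr_compliance": 2, "audit_trail": 3, "technical_support": 5,
            "usability": 5, "roadmap_fit": 5}
print(score_vendor(vendor_a), score_vendor(vendor_b))
```

Note how the weighting changes the outcome: vendor B rates higher on support and usability, but vendor A wins overall because the must-have compliance criteria carry the most weight, which is exactly what stack ranking is meant to capture.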



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer

Daily Tech Digest - September 01, 2020

UK government unveils next steps in digital identity plans

The Digital Identity Strategy Board’s six principles:

Privacy – When personal data is accessed, people will have confidence that there are measures in place to ensure their confidentiality and privacy; for instance, a supermarket checking a shopper’s age, a lawyer overseeing the sale of a house, or someone applying to take out a loan.

Transparency – When an individual’s identity data is accessed through digital identity products, they must be able to understand by whom, why and when; for example, being able to see how your bank uses your data through digital identity solutions.

Inclusivity – People who want or need a digital identity should be able to obtain one.

Interoperability – Setting technical and operating standards for use across the UK’s economy to enable international and domestic interoperability.

Proportionality – User needs and other considerations, such as privacy and security, will be balanced so that digital identity can be used with confidence across the economy.

Good governance – Digital identity standards will be linked to government policy and law. Any future regulation will be clear, coherent and aligned with the government’s wider strategic approach to digital regulation.


Iranian Hackers Using LinkedIn, WhatsApp to Target Victims

By personalizing the campaign and using these social media platforms, the attackers attempt to gain the victims' trust and coax them into opening the malicious links embedded in follow-up emails, according to the report. Charming Kitten, also known as APT35, Phosphorus and Ajax, is one of Iran's top state-sponsored hacking groups. While the group's tactic of impersonating journalists is not new, ClearSky researchers say the latest campaigns are the first time the threat actors have used channels other than email or SMS to target their victims. "This is the first time we identified an attack by Charming Kitten conducted through WhatsApp and LinkedIn, including attempts to conduct phone calls between the victim and the Iranian hackers," the researchers note in the report. "These two platforms enable the attacker to reach the victim easily, spending minimum time in creating the fictitious social media profile. However, in this campaign, Charming Kitten has used a reliable, well-developed LinkedIn account to support their email spear-phishing attacks." ... Charming Kitten has been targeting journalists and activists since at least 2013.


Dealing with sovereign data in the cloud

Data sovereignty is more of a legal issue than a technical one. The idea is that data is subject to the laws of the nation where it is collected and exists. Laws vary from country to country, but the most common governance you'll see is not allowing some types of data to leave the country at any time. Other regulations enforce encryption and how the data is handled and by whom. These were pretty easy rules to follow when we had dedicated data centers in each country, but the use of public clouds that have regions and points of presence all over the world complicates things. Misconfigurations, lack of understanding, and just general screw-ups lead to fines, damage to reputations, and, in some cases, disallowing the use of cloud computing altogether. Some best practices are emerging to deal with data sovereignty in the cloud. Data governance systems are worth their weight in gold. When dealing with regulations that are bound to data, these systems will keep you out of trouble, since they won't allow humans to violate data policies that are set to reflect the law of the land where the data resides. Training is another critical point. Most data sovereignty issues can be traced to human error, and everyone handling the data should be knowledgeable about the regulations. Many countries mandate this.
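The data-governance guard described above, a system that refuses to let humans violate residency policies, can be sketched in a few lines. The region rules here are illustrative assumptions; real residency rules come from counsel and regulation, not code comments:

```python
# Hypothetical residency policy: the set of destination regions each
# origin's data may legally be replicated to.
RESIDENCY_POLICY = {
    "de": {"de"},           # e.g. data collected in Germany stays in Germany
    "us": {"us", "ca"},
    "sg": {"sg"},
}

class SovereigntyViolation(Exception):
    """Raised when a transfer would move data out of its legal jurisdiction."""

def replicate(record: dict, dest_region: str) -> str:
    """Refuse any replication the record's residency policy forbids,
    so a misconfiguration cannot silently move data across borders."""
    origin = record["origin_region"]
    allowed = RESIDENCY_POLICY.get(origin, set())
    if dest_region not in allowed:
        raise SovereigntyViolation(
            f"data collected in {origin!r} may not be stored in {dest_region!r}")
    return f"replicated to {dest_region}"

print(replicate({"id": 1, "origin_region": "us"}, "ca"))   # allowed by policy
```

The point of putting the check in the data path, rather than in a runbook, is that the "general screw-ups" the article mentions become hard failures at replication time instead of fines after the fact.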


How IoT is helping cities become more sustainable than ever before

Sensor-enabled devices have been helping to monitor the environmental impact of cities for some time, collecting details about sewers, air quality, and garbage. Recently, air pollution has been a big pain point in cities such as London, Paris and Rome, where it is regularly cited as one of the most serious environmental problems affecting health today. To address this, many are turning to Air Quality Eggs (AQEs), open-source IoT platforms for monitoring air pollution. In simple terms, this is an open system that collates citizen-contributed data on air quality. ... Connected technologies are also helping to increase awareness of and visibility into individual energy and resource usage. Smart energy meters provide city dwellers with transparent data on their own energy consumption, which has been shown to reduce consumption across the board. Today, connected smart thermostats can also integrate with heating systems so that clear-cut decisions can be made on when to turn the heating on, based on fluctuating energy costs. Moreover, smart IoT water-management sensors can be combined with data analytics programmes to give consumers increased visibility into the amount of water they use.
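The thermostat decision described above, choosing when to run the heating based on fluctuating energy costs, reduces to a small scheduling problem. This is an illustrative sketch; the function name and the tariff figures are assumptions, not from any real smart-meter API:

```python
def heating_schedule(prices: list, hours_needed: int) -> list:
    """Pick the cheapest hours of the day to run the heating,
    given an hourly price forecast (prices[i] = cost in hour i)."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])   # chosen hours, in time order

# Illustrative hourly tariff: heat during the cheapest 3 hours.
tariff = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.40]
print(heating_schedule(tariff, 3))   # hours 2-4, when energy is cheapest
```

A real thermostat would add constraints (minimum comfort temperature, consecutive-hour runs), but the core idea is the same: the connected device turns a price feed into a clear-cut on/off decision.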


Overcoming the challenges of machine learning at scale

As with any emerging technology, another challenge is ensuring a positive return on investment with respect to business objectives. Success requires adjustments to both process and culture. “Organizations that are serious about scaling machine learning and bringing more models from the lab to production are investing in the processes, tools, and skills to support model management and operations,” said Isaac Sacolick, President of StarCIO and author of Driving Digital. “Organizations should start with high-value and easy-to-execute experiments, but then must recognize that scaling requires an investment in an end-to-end machine learning lifecycle.” Tim Crawford, CIO Strategic Advisor with AVOA, also emphasized the importance of process and culture. “First step, create a methodology and culture that supports ML and prioritizes how to engage ML,” he said. “Identifying the right projects, prioritizing, ensuring that you have enough good data and creating a culture that embraces ML across the enterprise.” A lack of alignment between ML projects and the business can hobble efforts to scale the technology, said Will Kelly, a technical writer.


Remote Work Has Law Firm Cybersecurity in a Fragile State

For even the most vigilant staff, homes are never going to be quite like offices. It’s too easy for someone to overhear sensitive information, and too much to expect that no one will ever use a personal email, chat tool or social media account to offer something that resembles legal advice. There are so many variables that can no longer be controlled. One firm has gone so far as to insist its lawyers switch off any smart device when on calls to certain clients lest an app listen in. Other firms have decided that certain apps should be banned altogether. Ropes & Gray banned its lawyers and staff from having social media app TikTok on devices that also receive work emails following privacy concerns from clients. And these are just the threats that have been discovered. Research by cybersecurity firm Tessian found that data loss incidents happen way more often than IT directors think. No wonder such people are constantly telling workers to take this stuff more seriously. Unfortunately, it is probably fair to say that there is only one thing that will really make people pay proper attention to their home working habits. And that is a major data breach hitting the headlines.


Is Covid-19 a Mental Health Tipping Point?

As more people remain at home in fear of COVID-19, it’s clear that the future of care is becoming increasingly digital. Even private insurers are stepping up, with most expanding their telehealth coverage, sometimes with no co-pay. This has been a windfall for digital behavioral health startups: venture funding for the technology has reached unprecedented levels, with a record $588M raised during the first half of 2020, spurred by the pandemic. It’s clear that things will never be the same, and in some ways that’s a good thing. This shift has forced many companies to have difficult discussions about staff mental health and wellbeing that had previously been avoided. This new openness is helping employees feel more comfortable in acknowledging how they’re feeling, making it okay not to feel “okay.” It also makes the role of managers more complicated, and more impactful, than ever before. Yet some managers may feel reticent to share their own feelings, or be unable to manage what can easily become an emotionally charged discussion. And at the same time, they may be suffering too. It is essential that companies ensure managers have the training and support they need to, in turn, support their teams.


Underbanked households would benefit from a regulated blockchain

To be clear, distributed ledger technology is not a panacea, but its core attributes reinforce and strengthen essential controls required by regulators. First, the immutability of the ledger prevents participants within a network from changing or tampering with transactions once they have been recorded. Second, since the technology is decentralized, it provides greater transparency and decreases the risk of important information being concentrated within one group or organization. Third, the encrypted nature of blockchain strengthens data privacy and security while enabling secure data-sharing between counterparties, including with regulators and law enforcement when necessary. Many financial institutions remain reluctant to incorporate blockchain tools into their payments or compliance operations. Skepticism from industry, regulators and policymakers has further dampened interest. Yet essential financial products and services are increasingly being facilitated outside of the traditional banking system, often at a faster pace. Many of these new tools are accessible across borders, beyond any particular regulatory jurisdiction.
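The immutability property described above, and the Create/Read-only model discussed earlier in this digest, come from hash chaining: each block commits to its predecessor's hash, so tampering with any recorded transaction invalidates every later block. A minimal single-node sketch (real distributed ledgers add consensus, signatures and networking on top):

```python
import hashlib
import json
import time

class Ledger:
    """A minimal hash-chained, append-only ledger: Create and Read only,
    never Update or Delete. Every block is timestamped and commits to the
    previous block's hash."""

    def __init__(self):
        self.blocks = []

    def append(self, transactions: list) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"prev": prev_hash, "ts": time.time(), "txs": transactions}
        payload = json.dumps(
            {k: block[k] for k in ("prev", "ts", "txs")}, sort_keys=True)
        block["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            payload = json.dumps(
                {"prev": b["prev"], "ts": b["ts"], "txs": b["txs"]},
                sort_keys=True)
            if b["prev"] != prev or \
               b["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = b["hash"]
        return True

ledger = Ledger()
ledger.append(["alice pays bob 5"])
ledger.append(["bob pays carol 2"])
print(ledger.verify())                              # chain intact
ledger.blocks[0]["txs"] = ["alice pays bob 500"]    # attempted tamper
print(ledger.verify())                              # tampering is detectable
```

This is why participants who do not trust each other can trust the ledger: no one can quietly rewrite history without every verifier noticing.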


Cisco: Making remote users feel at home on the new enterprise network

“The fundamental shift is that we need to think about our people working from home, and the home networks they use, as the default network. What we want is to create a high-quality micro-branch office in your home,” said Greg Dorai, vice president of product management and strategy for Cisco’s Enterprise Infrastructure and Solutions Group. “Now we must consider every work-from-home worker and every one of their home offices as worthy of the same level of connectivity support as our company headquarters and branches.” Realistically every company cannot provide every worker with headquarters-level support for their home networks, but there are technologies available and coming in the near future that can address the different needs of different workers, Dorai said. In Cisco’s case a couple of new offerings address wireless and wide area networking connectivity for remote users. “For employees for whom best-effort connectivity isn’t enough, we can replace or augment their home-networking access point with a Wi-Fi router that acts as an extension of the corporate network,” Dorai said. “Home wireless access points, configured by company IT before the employee installs them, can provide advanced security and monitoring and prioritize bandwidth for applications that need it.”


Interview with RavenDB Founder Oren Eini

RavenDB works with JSON documents, so using JavaScript is a very natural way to work with the database. There are a few ways that you can work with JavaScript in RavenDB. RavenDB has a JS interpreter built in (supporting ECMAScript 5.1 and large parts of 6) which can be used in queries and in patch operations. That gives you a lot of freedom to express what you want and apply logic on the database server. ... There are a few things on our roadmap that I am really looking forward to. For example, in RavenDB 5.1 we are going to ship replication support for Byzantine networks. This is useful when you have RavenDB nodes deployed in an environment where you don’t trust the remote nodes. A good example is when you need to integrate with a RavenDB instance that is running on a user’s machine, and you want to allow that user’s RavenDB instance access to some of the data in the cloud. That allows you to build systems that use RavenDB and collaborate, without needing to trust the remote locations. And conversely, the remote location doesn’t need to trust you. This will allow RavenDB to take on itself the role of synchronization between these locations.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - August 31, 2020

How the DevOps model will build the new remote workforce

Most importantly, the humans managing systems ultimately determined the company's capacity to adapt to the pandemic. "We recognized that … the systems may need to scale, and we may need to make changes to meet a new global demand, [but] that is much different than how our peers, the people we care about and work with, are going to be impacted by this," Heckman said. Thus, the SRE team's role was not just to watch systems and shore up their reliability, but also to manage communications with other employees, Heckman said, "not only so they had the current context of what we were thinking, suggesting and where we were headed, but also to give them some confidence that the system around them would be fine." Similar principles must be applied to manage the human impact of a longer-term shift to remote work, said Jaime Woo, co-founder of Incident Labs, an SRE training and consulting firm, in a separate presentation at the SRE from Home event. "The answer is not 'just be stronger,'" for humans during such a transition, any more than it is for individual components in a distributed computing system under unusual traffic load, Woo said.


From Defense to Offense: Giving CISOs Their Due

CISOs are now in a position where they must — somehow — reinvent how they work and how they are perceived within their organizations. Historically, they have been the company's risk-averse first line of defense against cyberattacks, and have been viewed as such. But this state of affairs needs to evolve. "CISOs cannot afford to be seen as blockers of innovation; they must be problem-solvers," says Kris Lovejoy, EY Global Advisory Cybersecurity Leader, in EY's report. "The way we've organized cybersecurity is as a backward-looking function, when it is capable of being a forward-looking, value-added function. When cybersecurity speaks the language of business, it takes that critical first step of both hearing and being understood. It starts to demonstrate value because it can directly tie business drivers to what cybersecurity is doing to enable them, justifying its spend and effectiveness." But do current CISOs have the right skills and experience to work in this new way and serve in a more proactive and forward-thinking role? That's an open question, and the answer will probably demand a new breed of CISO whose job is not driven mainly by threat abatement and compliance.



Want an IT job? Look outside the tech industry

It's always been true that most software was written for use, not sale. Companies might buy their ERP software from SAP and office productivity software from Microsoft, but they were writing all sorts of software to manage their supply chains, take care of employees, and more. What wasn't true then, but is definitely true now, is just how much of that software spend is focused on company-defining initiatives, rather than back-office software meant to keep the lights on. Small wonder, then, that in the past year companies have posted nearly one million jobs in the US, according to the Burning Glass data, which scours job postings. That number is expected to increase by more than 30% over the next few years, with non-tech IT jobs set to boom at a 50% faster clip than IT jobs within tech. As for who is hiring, though tech companies top the list (arguably one of them isn't really a tech company), the rest of the top 10 are decidedly non-tech. Digging into the Burning Glass report, and moving beyond software developer jobs specifically into the broader category of IT generally, Professional Services, Manufacturing, and Financial Services account for roughly half of all IT openings outside tech.


Data protection critical to keeping customers coming back for more

Despite the growing advancements on the data protection front, 51 percent of consumers surveyed said they are still not comfortable sharing their personal information. One-third of respondents said they are most concerned about it being stolen in a breach, with another 26 percent worried about it being shared with a third party. In the midst of the growing pandemic, COVID-19 tracking, tracing, containment and research depends on citizens opting in to share their personal data. However, the research shows that consumers are not interested in sharing their information. When specifically asked about sharing healthcare data, only 27 percent would share health data for healthcare advancements and research. Another 21 percent of consumers surveyed would share health data for contact tracing purposes. As data becomes more valuable to combat the pandemic, companies must provide consumers with more background and reasoning as to why they’re collecting data – and how they plan to protect it. ... As the debate grows louder across the nation, 73 percent of consumers think that there should be more government oversight at the federal and/or state/local levels.


The power of open source during a pandemic

The world needs to shift the way it's approaching problems and continue locating solutions the open source way. Individually, this might mean becoming connection-oriented problem-solvers. We need people able to think communally, communicate asynchronously, and consider innovation iteratively. We're seeing that organizations would need to consider technologists less as tradespeople who build systems and more as experts in global collaboration, people who can make future-proof decisions on everything from data structures to personnel processes. Now is the time to start building new paradigms for global collaboration and find unifying solutions to our shared problems, and one key element to doing this successfully is our ability to work together across sectors. A global pandemic needs the public sector, the private sector, and the nonprofit world to collaborate, each bringing its own expertise to a common, shared platform. ... The private sector plays a key role in building this method of collaboration by building a robust social innovation strategy that aligns with the collective problems affecting us all. This pandemic is a great example of collective global issues affecting every business around the world, and this is the reason why shared platforms and effective collaboration will be key moving forward.


Why Digital Transformation Always Needs To Start With Customers First

A fascinating point in Deloitte Insights’ research is the correlation it uncovered between an organization’s digital transformation maturity and the benefits it gains in efficiency, revenue growth, product/service quality, customer satisfaction and employee engagement. The researchers found a hierarchy of pivots successful enterprises make to keep pursuing more agile, adaptive organizational structures combined with business model adaptability, all driven by customer-driven innovation. The most digitally mature organizations can adopt new frameworks that prioritize market responsiveness and customer-centricity, and have an analytics- and data-driven culture with actionable insights embedded in their DNA. The two highest-payoff areas for accelerating digital maturity and achieving its many benefits are mastering data and creating more intelligent workflows. Deloitte Insights’ research team looked at the seven most effective digital pivots enterprises can make to become more digitally mature. The pivots that paid off best, as measured by revenue, margin, customer satisfaction, product/service quality and employee engagement, combined data mastery with improving intelligent workflows.


A searingly honest tech CEO tells the truth about working from home

Morris believes that everyone has what she calls their "Covid Acceptance Curve." But no two employees' curves are likely to be alike. "Many possible solutions for one employee are actually counter-indicated for others," she says. "Think of your team as an overlapping series of waves, each strand representing a person and their curve. You could try to slot in a single solution across 'strands,' but it will inevitably miss so many marks, reaching people too late, too early or with something that isn't even relevant to them." Some might imagine that one of the particular failings of tech leadership is the temptation to treat all employees with one broad free lunch. There, that should please everyone. Now, says Morris of her employees: "Some are experimenting with how to juggle work and homeschooling, some are struggling with crippling isolation, some have been impacted by Covid personally, others are facing anxiety of so many kinds." I wonder whether it was always this way, but leaders didn't care so much. Each employee has always been burdened with their own practical and emotional issues not directly related to work. Now, though, it's the physical distance and the constant, lonely staring at screens that intensifies difficulties -- and leadership's ability to anticipate or even understand them.


How does open source thrive in a cloud world? "Incredible amounts of trust," says a Grafana VC

Cloud gives enterprises a "get-out-of-the-burden-of-maintaining-open-source free" card, but savvy engineering teams still want open source so as to "not lock themselves in and to not create a bunch of technical debt." How does open source help to alleviate lock-in? Engineering teams can build "a very modular system so that they can swap in and out components as technology improves," something that is "very hard to do with the turnkey cloud service." That's the technical side of open source, but there's more to it than that, Gupta noted. Referring to how Elastic ate away at Splunk's installed base, Gupta said, "The biggest reason...is there is a deep amount of developer love and appreciation and almost like an addiction to the [open source] product." This developer love is deeper than just liking to use a given technology: "You develop [it] by being able to feel it and understand the open source technology and be part of a community." Is it impossible to achieve this community love with a proprietary product? No, but "It's a lot easier to build if you're open source." He went on, "When you're a black box cloud service and you have an API, that's great. People like Twilio, but do they love it?"


Is Low-Code Or No-Code Development Suitable For Your Startup App Idea?

Speed and adaptability are key ingredients in every product development phase of a startup. Assume it will take you four months to create and launch the first version of your product. You spoke with potential customers, gathered, and implemented their feedback to create the best solution you could build based on the information you have. If those potential customers need your solution, they will be looking forward to it. And if they committed financially, they’re going to be even more eager to use it. The truth is that in a competitive market where buyers have many options, eagerness and patience are two different things. The customers may wish to use your product sooner than later but they will not wait for it. Even if they don’t have better options today, they will figure out an alternative solution. Now assume you launched your product, served the first customers and gathered some more critical feedback. Your customers will not wait months for those changes, no matter how important your product is for them. Speed and adaptability can make or break a startup.


Tackle.io's Experience With Monitoring Tools That Support Serverless

Tackle runs its microservices as managed containers on AWS Fargate, deploys its front end on Amazon CloudFront, and uses Amazon DynamoDB for its database, Woods says. “We’ve spent a lot of time making sure that our architecture is something scalable and allows us to provide value to our customers without interruption,” he says. Tackle’s clientele includes software and SaaS companies such as GitHub, PagerDuty, New Relic, and HashiCorp. Despite the benefits, Woods says running serverless can introduce issues such as trying to track down obscure failures with APIs. “Once you adopt serverless, you’ll have a chain of Lambda functions calling each other,” he says. “You know that somewhere in that process was an error. Tracing it is really difficult with the tools provided out of the box.” Before adopting Sentry, Tackle spent a lot of engineering hours trying to discover the root cause of problems, Woods says, such as why a notification was not sent to a customer. “It might take half a day to get an answer on that.” Tackle adopted Sentry’s technology initially to get backtraces on such errors. Woods says his company soon discovered Sentry also sends alerts for failures Tackle was not aware of in its web app.
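The tracing problem Woods describes, an error somewhere in a chain of functions calling each other, is usually tamed by propagating a correlation ID through the chain so every log line can be tied back to one request. A minimal sketch (handler names and payload shape are hypothetical; real Lambda chains invoke each other asynchronously rather than as direct calls):

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("trace")

def handler_a(event: dict) -> dict:
    # The first function in the chain mints a correlation ID if none
    # exists, so every downstream log line shares one identifier.
    event.setdefault("correlation_id", str(uuid.uuid4()))
    log.info("handler_a start cid=%s", event["correlation_id"])
    return handler_b(event)

def handler_b(event: dict) -> dict:
    log.info("handler_b start cid=%s", event["correlation_id"])
    try:
        return handler_c(event)
    except ValueError:
        # The failure surfaces here, but the cid tells you exactly which
        # request (and upstream invocation) it belongs to.
        log.error("handler_c failed cid=%s", event["correlation_id"])
        raise

def handler_c(event: dict) -> dict:
    if "customer" not in event:
        raise ValueError("no customer in payload")
    return {"ok": True, "correlation_id": event["correlation_id"]}

result = handler_a({"customer": "acme"})
print(result["ok"])
```

Tools like Sentry automate this kind of stitching (plus alerting on failures you did not know about), which is why hand-rolled tracing like the above tends to be only a stopgap.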



Quote for the day:

"You can't lead anyone else further than you have gone yourself." -- Gene Mauch