Daily Tech Digest - September 02, 2018

Strategies for Improving Smart City Logistics

Efficient, timely and accurate delivery is essential to the survival of retailers and logistics providers in an Amazon Prime world. Smart cities' goals of livability and sustainability mean they want fewer trucks, less congestion and less pollution. For all stakeholders to achieve their goals, the only answer is to work together. If cities, retailers, and logistics providers work together, collaboration and digital solutions can help resolve the traditional challenges of last-mile logistics and improve the livability and sustainability of cities. ... In Europe, where urbanization is higher, CO2-reduction goals are more aggressive, and the narrow streets of older cities are ill-equipped to handle a rise in urban freight transport, many cities and initiatives have been working on this issue. The European Union has been co-funding projects and working collaboratively with cities and partners, including logistics companies such as TNT and DHL as well as local retailers, to create consolidation centers and more sophisticated delivery practices.


Bank Products Are Dead: Long Live Experiences


By 2020 we’re going to see 50 billion new devices connected to the Internet — everything will be smart. Smart fridges that order your groceries or tell you what you can cook with the remaining items inside, sensors you wear on your wrist or in your clothes that monitor your health and activity, cars that will talk to each other and drive themselves, smart mirrors that will show you how you look in that new shirt, robot drones and pods that will deliver your groceries or Amazon orders — the world will be filled with smart stuff. We live in a world where new technology emerges and is adopted in months, versus the years it took previously. It’s all moving so quickly. As more and more technology is injected into our lives, we become acclimatized and just accept the increased role technology has to play. This is known as technology adoption diffusion. As we move to this technology-optimized world, we’ll start to redesign where and how humans fit in society. Banking will be embedded in our lives.


This mind-reading AI can see what you're thinking - and draw a picture of it

While headlines around the world have screamed out that AI can now read minds, the reality seems to be more prosaic. Computers are not yet able to anticipate what we think, feel or desire. As science writer Anjana Ahuja remarked in the Financial Times, rather than telepathy, “a more accurate, though less catchy, description would be a ‘reconstruction of visual field’ algorithm”. Most of the research so far has been aimed at deciphering images of what subjects are looking at or, in limited circumstances, what they are thinking about. Studies have previously focused on programs producing images based on shapes or letters they had been taught to recognize when viewed through subjects’ minds. However, in one recent piece of research, from Japan’s ATR Computational Neuroscience Laboratories and Kyoto University, scientists said that not only was a program able to decipher images it had been trained to recognize when people looked at them but: “our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars.”


Microsoft officially christens 'Redstone 5' as the Windows 10 October 2018 Update

The October 2018 Update rollout will likely be staggered, as in past feature releases, with the machines known to best handle the new bits receiving them first. Microsoft also will likely begin rolling out the server complements to the October 2018 Update -- Windows Server 1809 and Windows Server 2019 -- on the same day in October as the client build goes live. The part of today's announcement that is a bit more surprising is that Microsoft is still saying that the October 2018 Update will be going to the "nearly 700 million devices" running Windows 10. Microsoft has been using this same 700 million figure since March 2018 and hasn't provided an updated momentum figure. ... The Windows 10 October 2018 Update will include the Cloud Clipboard, a dark-mode File Explorer option, a number of new Notepad features and other tweaks and updates. It also will deliver a number of new security and enterprise features, as well as a new Windows 10 Enterprise Remote Sessions edition. Microsoft will likely detail these enterprise features at its Ignite show.


Want To Survive & Thrive With AI?…Then Mind The Skills Gap

“The battle for diversity is vital, just from the perspective of finding the best talent in the widest possible pool. Demystifying the idea that AI is something very difficult is crucial; you do not need to code like Sergey Brin, the co-founder of Google. Being unafraid of a strange discipline is key. There is a huge gap between STEM and the arts and we need each other,” says Dr Lauterbach. ... “The phrase Artificial Intelligence is misleading because everything happens by human design. Human beings pick big data sets, algorithms, methodology and processing hardware.” According to Dr Lauterbach, if algorithms are not created to be inclusive, they could contribute to inequalities and thus would not be effective in helping the world. “AI has a capability to scale everything we are about as humans,” she says. “So if you have a team of only white male developers or only Chinese male developers, then you will get a data set or some algorithms that are wired according to the preferences, habits and thinking processes of those groups.”


The Modern Marketing Model for the Financial Industry


When we consider the new complexities of modern financial services marketing, it is best to integrate both traditional and digital marketing in a manner that achieves synergistic benefits. By fusing classical and digital marketing, organizations are better positioned to identify capability gaps and focus on where and how to move forward. The chart below from eConsultancy helps to visualize the required components. This model is a natural progression from previous models used by marketers. For instance, in the 1960s, the prevalent marketing model was the ‘4Ps’ (Product, Price, Place and Promotion). In the 1980s, three more Ps were added (People, Process and Physical evidence), reflecting increased customer interaction and the beginning of targeting. In the 1990s, ROI entered the equation, as did the growing importance of targeting (the ‘4Cs’: Consumer, Cost, Communication and Convenience). The new marketing model highlights the importance of customer insight, analytics, brand and customer experience.


7 factors that will push implementation of AI in healthcare


Because artificial neural networks of deep learning mirror the brain’s ability to learn difficult patterns, Hinton noted that the networks also model complicated relationships between inputs and outputs, used for predicting future medical events from past events or large data sets. “As data sets get bigger and computers become more powerful, the results achieved by deep learning will get better, even with no improvement in the basic learning techniques, although these techniques are being improved,” Hinton wrote. A remaining challenge artificial intelligence has yet to overcome, Hinton wrote, is detecting patterns in unlabeled data in the process called “unsupervised learning.” “As new unsupervised learning algorithms are discovered, the data efficiency of deep learning will be greatly augmented in the years ahead, and its potential applications in healthcare and other fields will increase rapidly,” according to Hinton. Overall, clinicians and physicians should be aware of the challenges that come with implementing AI and deep learning into everyday workflow and know how to efficiently approach it.


Web-based cryptojacking
By taking as an example the 10 most profitable sites that hold mining code, the researchers estimated that they are able to generate between 0.53 and 1.51 Monero per day, i.e., between 119 and 340 USD (at the time). While it’s not much, given that the revenue is achieved without any cost to the miner, this is still a notable profit. “However, we conclude that current cryptojacking is not as profitable as one might expect and the overall revenue is moderate,” the researchers noted. How to stop it? The researchers found that existing blacklist-based approaches used by web browsers are trivial to evade and the lists quickly become outdated. Instead of static blacklists, they leveraged a set of heuristic indicators for candidate selection and a dedicated performance measurement step for precise miner identification. But, however suitable this approach is, they pointed out that it likely works well only because today’s mining operators don’t anticipate it. As the only reliable indicator of active mining is prolonged and excessive CPU usage, their advice for browser makers is to implement CPU allotments for tabs.
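The prolonged-CPU-usage indicator the researchers describe can be sketched as a simple detection heuristic. The threshold and window values below are illustrative assumptions, not figures from the study:

```python
def flag_mining_tabs(samples, threshold=0.8, min_sustained=10):
    """Flag tabs whose CPU usage stays above `threshold` for at least
    `min_sustained` consecutive samples.

    `samples` maps a tab id to a list of CPU-usage fractions (0.0-1.0),
    one per sampling interval. Short bursts are ignored, since only
    sustained load suggests in-browser mining rather than, say, a page
    rendering heavily for a moment.
    """
    flagged = []
    for tab, usage in samples.items():
        run = longest = 0
        for u in usage:
            run = run + 1 if u >= threshold else 0
            longest = max(longest, run)
        if longest >= min_sustained:
            flagged.append(tab)
    return flagged
```

A real browser-side implementation would combine this with the other heuristic indicators the paper mentions, since legitimate workloads (games, video encoding) can also sustain high CPU usage.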


Another sticking point the panel discussed was the issue of maturity. That is, organizations have to ask themselves whether they truly have the ability to define, develop and manage their AI investments in a way that will create value. After all, AI isn’t some piece of plug-and-play software you can just flip on and start using. There are significant process changes that need to occur, in technology systems and human employees alike. Security should also be of chief concern. AI’s impact on security can be profound, which means you must determine what controls and protections will be necessary from the very beginning to ensure your sensitive data (sources and outcomes) remains secure. When there’s confusion and disagreement over how to proceed, it can lead to a case of analysis paralysis. So before charging full steam ahead with AI, companies should realistically assess their own readiness to do so. Thankfully, the IPsoft AI Pioneers Forum is now working to develop a universal AI maturity model that may be helpful to companies in these cases.


Focusing on machine learning 2020: augmentation instead of automation


The holy grail of augmentation can easily be seen as the pursuit of creativity, but there are many other areas of interest as well. Strategic decision making is one: choosing where to build new skyscrapers, where to build new infrastructure (bridges, roads, facilities), what type of aircraft to buy to maximize profitability and growth, and what routes to fly, factoring in sustainability. These questions are still largely thought out with Excel sheets, BI tools and GIS systems, and maybe some legacy statistics software (SAS, SPSS) with some custom analysis. While that may be sufficient for some industries, many of these problems have so many attributes that it’s impossible for us as humans to make optimal decisions — hence welcoming optimization and machine learning as augmenting features of decision making. And although it’s still quite early to tell, deep learning may well be of use here.
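The kind of constrained decision described above (which aircraft to buy, subject to budget and sustainability) becomes a small optimization problem once it is written down. A toy brute-force sketch, with entirely hypothetical costs, profits, and CO2 scores:

```python
from itertools import product

# Hypothetical aircraft types: (purchase cost $M, annual profit $M, CO2 score)
AIRCRAFT = {
    "narrow_body": (90, 12, 3),
    "wide_body": (250, 28, 8),
    "regional": (45, 5, 2),
}

def best_fleet(budget, co2_cap, max_each=3):
    """Exhaustively search small fleet mixes, maximizing profit subject to
    a budget and a sustainability (CO2) cap. Brute force is fine for a toy
    problem; real fleet planning would use a MILP solver instead."""
    best = (0, None)
    types = list(AIRCRAFT)
    for counts in product(range(max_each + 1), repeat=len(types)):
        cost = sum(c * AIRCRAFT[t][0] for c, t in zip(counts, types))
        profit = sum(c * AIRCRAFT[t][1] for c, t in zip(counts, types))
        co2 = sum(c * AIRCRAFT[t][2] for c, t in zip(counts, types))
        if cost <= budget and co2 <= co2_cap and profit > best[0]:
            best = (profit, dict(zip(types, counts)))
    return best
```

Even this tiny example shows why spreadsheets struggle: with realistic numbers of attributes and options, the search space explodes, which is exactly where optimization and machine learning augment human decision making.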



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - September 01, 2018

Human intelligence and AI are vastly different — so let’s stop comparing them
Let’s start with the data part. Contrary to computers, humans are terrible at storing and processing information. For instance, you must listen to a song several times before you can memorize it. But for a computer, memorizing a song is as simple as pressing “Save” in an application or copying the file onto its hard drive. Likewise, unmemorizing is hard for humans. Try as you might, you can’t forget bad memories. For a computer, it’s as easy as deleting a file. When it comes to processing data, humans are obviously inferior to AI. In all the examples listed above, humans might be able to perform the same tasks as computers. However, in the time that it takes for a human to identify and label an image, an AI algorithm can classify one million images. The sheer processing speed of computers enables them to outpace humans at any task that involves mathematical calculations and data processing. However, humans can make abstract decisions based on instinct, common sense and scarce information. A human child learns to handle objects at a very young age. For an AI algorithm, it takes hundreds of years’ worth of training to perform the same task.



What is Industry 5.0?


The handshake between a human being and a robot symbolizes the new reality, even though it will not look like this in the future: most automation, machine intelligence and even robots work in the background, supporting the workforce or taking on large portions of work, as in production and manufacturing. Investment banking systems have been in use for more than a decade to negotiate and set share prices and make sell/buy decisions within nanoseconds, independent of any human interaction. The next wave of industrial revolution needs to define how we collaborate and how we set the rules for human-machine interaction. Artificial intelligence is already making decisions, as seen in an impressive example at Google I/O 2018, presented by Sundar Pichai, CEO of Google, where a voice assistant called to make an appointment and the woman answering the call had no way of recognizing that she was speaking to a robot.


Why Cybersecurity Is Becoming A Top-Priority Investment


Using tools like Privnote is one way to securely transfer valuable data. Privnote is a platform that securely transfers data online and then self-destructs. For protecting large amounts of data, the smartest way to go about finding the right cybersecurity company is to ask around for referrals. You’re better off doing this than making a blind Google search and hoping for the best. If a cybersecurity company is good enough for your colleagues and peers, then it will likely be good enough for your business. My business develops engaging content that attracts the millennial generation, which means we launch a considerable amount of online advertising campaigns. Some of these campaigns require creating B2B accounts with other platforms, so I’m not only protecting my clients’ information, but also my own. Additionally, your product itself needs to be protected. Cyber thieves will try to steal your product’s Amazon Standard Identification Number (ASIN) and profit from your online sales.


Empowering executives with data security effectiveness evidence

Your leaders are making decisions predicated on these non-security measures every day to increase value for their shareholders, address stakeholder requirements, and mitigate business risks. Security is simply another variable in the business risk equation. In fact, your security program isn’t about security risk in and of itself, but rather, the financial, brand, and operational risk from security incidents. One area where the need for security effectiveness evidence is profusely obvious is around rationalization. For example, many auditors no longer ask, “Do you have security tools in place to mitigate risk?” because the answer is always, “Yes, but we need more tools, training, and people anyhow.” Now auditors are asking for rationalization in terms of, “Can you prove, with quantitative measures, that our security tools are adding value? And can you supply proof regarding the necessity for future security investment?”


Using Neuroscience to Make Feedback Work and Feel Better


Modern humans base their decisions on many of the same pro-social, consensus-building impulses. We make polite chitchat at work, even in our most antisocial states, so others will see us as friendly. We avoid talking to the attractive stranger at the bar because something deep and ancient in us registers the possibility of rejection as a matter of life and death. When neuroscientists conduct brain scans of people exposed to social threats, such as a nasty look or gesture, the resulting images look just like the scans of people exposed to physical threats. Our bodies react in much the same ways. Our faces flush, our hearts race, and our brains shut down. No matter if we’re giving a speech to thousands or coming face-to-face with a jungle cat, our body’s response is the same: We want out. Feedback conversations, as they exist today, activate this social threat response. In West and Thorson’s study, participants’ heart rates jumped as much as 50 percent during feedback conversations.


Big Data And ML: A Marriage Between Giants!


We live in an age where ‘information’ is packaged, shared and valued, quite literally, more than anything else! And, there is enhanced engagement in this information exchange. All this activity is resulting in tons of data being pumped out — Big Data. To those listening, this data can be harnessed and mined for answers. Whether it is regarding business profitability, marketing strategy or identifying and mitigating risk, companies can ascertain any and every detail. Aiding in these pursuits is the growing computational power of systems. There is abundant storage available for all the data. In-memory is adding to the speed of performance. Cloud and pay-as-you-go models are making engagements feasible. And, the economies of scale are making these systems highly accessible and affordable. High-tech companies, technological corporations, and data scientists, all, predict the remarkable, dominant and disruptive power of ML and Big Data combined.


Confronting the Greatest Risks To Financial Services’ Future

In a behavioral study of international bankers, it was found that bank executives take significantly less risk when reminded of their role as bankers. In the study, they invested about 20% less in the risky asset category relative to the control group. In other words, when they were acting in a ‘banker mentality’ – reminded about banking, their bank, and their banking careers – they were more conservative than they would otherwise be. When the same people were not reminded of their banker role, they took greater risk, indicating that the risk in banking doesn’t come from culture but from structure. The question becomes: is there something about the culture and structure of banks that makes bankers risk-averse? Or is this something that is just evident now? From my perspective, I have seen that “bankers being bankers” tends to result in lower acceptance of change; an adherence to legacy policies, processes, and thought patterns; and the resultant risk of not being able to keep up with consumer demands.


Thinking outside-of-the-black-box of machine learning


“Speech separation or overlapped speech recognition is paramount for far-field conversational speech recognition,” said Yoshioka. “It has a wide range of potential applications, such as meeting assistance and medical dialog transcription. As computers begin to sense the world better and get smarter, they will be able to provide us more effective assistance and help us focus on more important things.” In the accompanying paper, titled “Layer Trajectory LSTM”, Microsoft AI researcher Jinyu Li and fellow researchers Changliang Liu and Yifan Gong successfully reassessed the potential for innovation in traditional time-based LSTM networks. Jinyu Li described his conceptual approach saying, “Sometimes deep learning is treated as a black box and researchers just keep trying different model structures without taking a couple of steps back and thinking about why the models work – and what else might be possible.” Traditional LSTM networks, a class of recurrent neural networks (RNNs), are well suited to classifying and making predictions based on time-series data such as speech.
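For readers unfamiliar with the LSTM networks discussed here, a single cell step can be written out in a few lines. This is a generic single-unit LSTM sketch with illustrative scalar weights, not the layer-trajectory variant from the Microsoft paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell.

    `w` holds per-gate weights as (w_x, w_h, b) tuples for the input (i),
    forget (f), and output (o) gates and the candidate cell value (g).
    """
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])
    c = f * c_prev + i * g   # cell state carries long-range memory
    h = o * math.tanh(c)     # hidden state is the step's output
    return h, c
```

Running this over a sequence (one call per timestep, feeding `h` and `c` forward) is what lets LSTMs track context in time-series data such as speech frames.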


Eclipse Releases Versions 1.4 and 2.0 of MicroProfile

Both of these Eclipse projects have merit and are making progress in their respective domains, with MicroProfile technologies building upon those being contributed to Jakarta EE. But are the projects themselves ready to be merged? IMHO, no. MicroProfile has grown tremendously from its humble beginnings. We have several new component features and versions that extend the Enterprise Java programming model for microservices development. And we have done this in a relatively short amount of time: Six major MicroProfile releases with sixteen component releases in less than two years. Due to the enormity and complexities of this move, Jakarta EE is not yet ready to match this rate of progress. And, as Jakarta EE has not yet completed the definition of its specification process, it is not yet ready to accept the fast-paced release cycle required by MicroProfile. The big difference here is that MicroProfile has never tried to be a standards body.


Think AI Is Too Scary? This Expert Wants to Calm Your Fears


The first thing to tell you is that I really see this as a listening experience, at least initially, so I can be responsive to what the community is looking for. Having said that, one big area is to enhance and strengthen AAAI links with industry. Our annual conference has a lot of participants from industry but I'd like to see more presence from industry research labs. Traditionally it's been a very academic conference but today, many professors spend time in industry. We need to give that sector a lot more presence. That's a major focus. I am also looking to include underserved communities in our membership to diversify it strongly; launch K-12 initiatives to grow the pipeline; and ensure we include professionals in other areas. ... We need to look at employing ethics within AI at every level: how systems need to be designed with different mechanisms to respond ethically to events; understand when an AI system could do harm; and so on.



Quote for the day:


"The great leaders have always stage-managed their effects." -- Charles de Gaulle


Daily Tech Digest - August 31, 2018

IoT gets smarter but still needs backend analytics

The difference between doing analytics completely on an endpoint device or partially on a device is an important one, according to Gartner research vice president Mark Hung. At the core, the analytics done by IoT implementations is about machine learning and artificial intelligence, letting systems take data provided by smart endpoints and fashion it into actionable insights about reliability, performance, and other line-of-business information automatically. Applying the lessons learned from sophisticated ML is easy enough, even for relatively constrained devices, but some parts of the ML process are much too computationally rigorous to happen at most endpoints. This means that the endpoints themselves don’t change their instructions, but that they provide information that can be used by a more powerful back-end to customize a given IoT implementation on a per-endpoint basis. The case of video analytics for smart city applications like traffic monitoring – using a system where the cameras themselves track pedestrians and motorists, then score that data against a centrally-created AI model – is an instructive one.
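The split Hung describes, heavy training on the backend and lightweight scoring on the endpoint, can be illustrated with a toy sketch. The model coefficients below are hypothetical stand-ins for real backend training output:

```python
import math

# A backend-trained model ships to endpoints as plain coefficients; the
# numbers below are hypothetical stand-ins for real training output.
MODEL = {"weights": {"vibration": 2.0, "temperature": 1.5}, "bias": -3.0}

def endpoint_score(reading, model=MODEL):
    """Score one sensor reading on the device itself: the heavy lifting
    (training) happened on the backend, and the endpoint only evaluates
    a logistic function, which even constrained hardware can afford."""
    z = model["bias"] + sum(model["weights"][k] * v for k, v in reading.items())
    return 1.0 / (1.0 + math.exp(-z))

def needs_attention(reading, threshold=0.5):
    """Endpoint-side decision, e.g. report a likely imminent failure upstream."""
    return endpoint_score(reading) >= threshold
```

When the backend retrains on accumulated fleet data, it pushes fresh coefficients to each endpoint, which is how the per-endpoint customization happens without the endpoint ever running training itself.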


The anatomy of fake news: Rise of the bots

Spreading misinformation has become a mainstream topic, to the extent that even the term ‘Twitter bot’ is well recognised and has established itself in the modern lexicon. While the term is well known, it can be argued that the development and inner workings of Twitter bots are less well understood. Indeed, even identifying accounts attributed to bots is considerably more difficult, and with good reason, since their objective of appearing as legitimate interactions requires constant refinement. This continuous innovation from botnet operators is necessary as social media companies get better at identifying automated accounts. A recent study conducted by SafeGuard Cyber analysed the impact and techniques leveraged by such bots, and in particular looked at bots attributed to Russian disinformation campaigns on Twitter. The concept of bot armies is challenged in the research: of the 320,000 accounts identified, the bots were divided into thematic categories presenting both sides of the story.


How to retrofit the cloud for security: 2 essential steps

Identity and access management (IAM) can be retrofitted after a cloud migration without a lot of effort. While it depends on the IAM system you use, the native IAM systems found in clouds such as Amazon Web Services and Microsoft Azure are typically both a better choice and a quicker choice. At the end of the day, of course, it’s your particular requirements that will determine your choice of IAM. Keep in mind that IAM systems depend on directory services to maintain identity and to provide the proper authorization to those identities. You must deploy one of those systems if you don’t already have one. Also, keep in mind that IAM is only of value if all applications and data are included in the system, both in the cloud and on-premises. I’m not a fan of shortcuts when it comes to cloud computing security. However, reality sometimes makes these shortcuts a necessary evil. The result is not as good as if security were integrated from the start. However, if security was not implemented, most data and applications are at risk for hackery.
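As an illustration of what native cloud IAM configuration looks like, here is a minimal AWS IAM policy granting read-only access to a single S3 bucket; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }
  ]
}
```

Policies like this are attached to identities from your directory service, which is why the directory must be in place before IAM retrofitting can deliver value.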


Why Everyone’s Thinking About Ransomware The Wrong Way

If you think your IT systems are the target of ransomware, you’re not alone. But you’re also not correct. Your IT systems are just the delivery mechanism. The real target is your employees. Ransoms rely on psychological manipulation that IT systems aren’t susceptible to (AI isn’t there just yet). The systems are the prisoner being held for money. The psychology of ransomware is complex, and the two main types — locker and crypto — use different tactics and are successful within different populations of people (more on this later). It’s not just a case of getting your workforce to abide by security rules and keep their eyes open for dodgy ransom notes (this just helps prevent the data and systems from becoming prisoners). You must recognize their unique psychological susceptibilities and design work practices that prevent individuals within your workforce from becoming attractive targets. As mentioned above, ransomware uses complex psychological tactics to get its targets to pay. The two main types of ransomware play off different psychological vulnerabilities.


Here's what two executive surveys revealed about blockchain adoption

Rajesh Kandaswamy, a Gartner fellow and chief blockchain researcher, had a more sobering analysis of blockchain adoption, saying that while interest among enterprises is high, actual deployments are rare. Even when enterprises do perform proof of concept projects, they're often rolled out under pressure from executives who want to do "something" with blockchain. "Most industries are not close to adoption, and even when they do, they do limited activity to test the technology, not as much because of a strong business case," Kandaswamy said via email. A Gartner CIO survey released in May revealed that fewer than 1% of more than 3,100 respondents had rolled out production blockchain systems. Gartner has since completed a second survey whose numbers have yet to be released, but adopters remain low, Kandaswamy said. ... "The challenge for CIOs is not just finding and retaining qualified engineers, but finding enough to accommodate growth in resources as blockchain developments grow," Gartner Research vice president David Furlonger stated in the report.


Android 'API breaking' vulnerability leaks device data, allows user tracking

All versions of Android, including OS forks -- such as Amazon's Kindle FireOS -- are believed to be affected, potentially impacting millions of users. The cybersecurity firm initially reported its findings to Google in March. ... The patch was confirmed in early August, leading to the public disclosure of the vulnerability. Google has fixed the security flaw in the latest version of the Android operating system, Android P, also known as Android 9 Pie. However, the tech giant will not fix prior versions of Android as resolving the vulnerability "would be a breaking API change," according to the cybersecurity firm. Earlier this month, Google announced the launch of Android 9 Pie, which is already rolling out to Android users on some devices. Android devices manufactured by vendors including Nokia, Xiaomi, and Sony will receive the updated OS by the end of fall. The update includes new gesture navigation, themes, and adaptive settings for screen brightness and battery life, among others. Users able to upgrade to Android 9 are encouraged to do so.


Chip shrinking hits a wall -- what it means for you

“The vast majority of today’s fabless customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node. Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law,” said Thomas Caulfield, who was named CEO of GlobalFoundries last March, in a statement. Making the move to a new process node is no trivial matter. It takes billions to drop one size in process technology. What Caulfield is saying is there are fewer customers for such bleeding-edge manufacturing processes, so the return on investment isn’t there. “I think we’ve reached a change in Moore’s Law. Moore’s Law is an economic law: that we reduce the cost of transistors with each generation. We will still reduce the size of the transistor but at a slower rate,” said Jim McGregor, president of Tirias Research, who follows the semiconductor industry.


No-code and low-code tools seek ways to stand out in a crowd


A suite of prebuilt application templates aims to help users build and customize a bespoke application, such as salesforce automation, recruitment and applicant tracking, HR management and online learning. And a native mobile capability enables developers to take the apps they've built with Skuid and deploy them on mobile devices with native functionality for iOS and Android. "We're seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform," said Ray Wang, an analyst at Constellation Research in San Francisco. "People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly." Skuid -- named for an acronym, Scalable Kit for User Interface Design -- was originally an education software provider, but users' requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications.


Will Google's Titan security keys revolutionize account security?

Titan security keys use the FIDO Universal Second Factor (U2F) protocol, which relies on public key cryptography. Adding a Titan device to an account ties a public key to that account; during login, the Titan device supplies a cryptographic signature created with its private key, which is verified against that stored public key. Titan keys also protect against phishing attacks from fake login portals—even with a compromised password, a Titan-enabled account is still protected. When a user logs in to a fake portal, Google said, the key will know that it isn't a legitimate website and will stop the login process immediately. Don't assume that Titan keys are only usable with Google accounts—the FIDO protocol is a popular one that works with a multitude of websites and applications. Any website that supports U2F will work with a Titan key. Titan hardware is also built to be secure—Google designed the devices around a secure element hardware chip that contains all the necessary firmware for it to function, and all of that information is sealed in during the manufacturing process, as opposed to being installed afterward.
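The challenge-response flow can be sketched in miniature. Real Titan keys sign with an ECDSA private key sealed in the secure element; since Python's standard library has no ECDSA, this toy uses an HMAC as a stand-in for the signature. The property it illustrates is the real one, though: the signed data includes the origin, so a signature produced for a phishing domain never verifies for the legitimate site. All names and values below are invented.

```python
import hashlib
import hmac

def key_sign(device_secret: bytes, challenge: bytes, origin: str) -> bytes:
    # The "signature" covers both the server's challenge and the origin the
    # browser reports -- origin binding is what defeats fake login portals.
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(registered_secret: bytes, challenge: bytes, origin: str, signature: bytes) -> bool:
    expected = hmac.new(registered_secret, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

secret = b"sealed-at-manufacture"       # stands in for the private key in the secure element
challenge = b"random-server-nonce"

good = key_sign(secret, challenge, "https://accounts.google.com")
phish = key_sign(secret, challenge, "https://accounts.goog1e.com")  # fake portal origin

print(server_verify(secret, challenge, "https://accounts.google.com", good))   # True
print(server_verify(secret, challenge, "https://accounts.google.com", phish))  # False
```

Even with the user's password (and here, even with a valid device), the assertion made for the phishing origin fails verification at the real site.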


DDD With TLC


When introducing DDD to a new team, start with bounded contexts – breaking down big problems into small, manageable, solvable problems. But leave out the terminology and just start doing it. Understanding the dynamics of a team in order to successfully coach them has a lot to do with instinct and empathy. It’s so important to listen carefully and to be respectful, non-judgmental and kind. People resist DDD because they believe it is too much to learn or is too disruptive to their current process. Solving small problems is a good approach that can gain trust in adopting DDD. Domain modeling is an art, not a science, so it’s not uncommon to run into a wall and circle back or even have a revelation that makes you change direction. Teams benefit from encountering that with a coach who is familiar with modeling and is not worried about the perspective changing mid-process.



Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks

Daily Tech Digest - August 30, 2018

Companies are not focusing enough on machine identities, says study
We spend billions of dollars protecting usernames and passwords but almost nothing protecting the keys and certificates that machines use to identify and authenticate themselves. The number of machines on enterprise networks is skyrocketing and most organisations haven’t invested in the intelligence or automation necessary to protect these critical security assets. The bad guys know this, and they are targeting these keys and certificates because they are incredibly valuable assets across a wide range of cyber-attacks. According to the study, Securing The Enterprise With Machine Identity Protection: Newer technologies, such as cloud and containerisation, have expanded the definition of a machine to include a wide range of software that emulates physical machines. Furthermore, these technologies are spawning a tidal wave of new, rapidly changing machines on enterprise networks.
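As an illustration of the kind of automation the study says most organisations lack, here is a minimal sketch that flags machine identities whose certificates expire within a monitoring window. The hostnames and dates are invented.

```python
from datetime import datetime, timedelta

def expiring_soon(inventory, now, window_days=30):
    """Return machine identities whose certificates expire within the window."""
    cutoff = now + timedelta(days=window_days)
    return sorted(name for name, expiry in inventory.items() if expiry <= cutoff)

now = datetime(2018, 8, 30)
inventory = {
    "api-gateway.internal": datetime(2018, 9, 10),
    "build-runner-07":      datetime(2019, 3, 1),
    "k8s-ingress.prod":     datetime(2018, 9, 25),
}
print(expiring_soon(inventory, now))  # ['api-gateway.internal', 'k8s-ingress.prod']
```

A real deployment would populate the inventory by scanning hosts for certificates and keys rather than from a hand-written dict, but the monitoring loop is the same shape.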



The Evolution of IoT Attacks


In addition to the evolution of IoT devices, there has been an evolution in the way attackers think and operate. The evolution of network capabilities and large-scale data tools in the cloud has helped foster the expansion of the IoT revolution. The growth of cloud and always-on availability to process IoT data has been largely adopted among manufacturing facilities, power plants, energy companies, smart buildings and other automated technologies such as those found in the automotive industry. But this has increased the attack surfaces for those that have adopted and implemented an army of possible vulnerable or already exploitable devices. The attackers are beginning to notice the growing field of vulnerabilities that contain valuable data. In a way, the evolution of IoT attacks continues to catch many off guard, particularly the explosive campaigns of IoT-based attacks. For years, experts have warned about the pending problems of a connected future, with IoT botnets as a key indicator, but very little was done to prepare for it. Now, organizations are rushing to identify good traffic vs. malicious traffic and are having trouble blocking these attacks since they are coming from legitimate sources.


Microservices development will fail with monolithic mindset


Effective microservices development requires organizational change that goes beyond simple, single-team DevOps, said Brian Kirsch, an IT architect and instructor at Milwaukee Area Technical College. Without an overarching DevOps infrastructure across all projects, too many enterprises have created siloed DevOps mini-teams, each producing hundreds of microservices. It's not possible to create a cohesive product when each team works independently and doesn't know what others are doing, Kirsch said. An important practice for organizations moving to microservices is to standardize development tools, frameworks and platforms. Standardization prevents overspending on tools and training and discourages expertise silos and competition for resources. In siloed development, each team in a company often uses its own preferred technology. This reduces engineering resources, because developers may lack skill sets needed to switch teams or substitute on a team using another technology, Kirsch said.


Top 9 Data Science Use Cases in Banking


Banks are obliged to collect, analyze, and store massive amounts of data. ... Nowadays, digital banking is becoming more popular and widely used. This creates terabytes of customer data, so the data science team's first step is to isolate the truly relevant data. After that, being armed with information about customer behaviors, interactions, and preferences, data specialists with the help of accurate machine learning models can unlock new revenue opportunities for banks by isolating and processing only the most relevant client information to improve business decision-making. Risk modeling is a high priority for investment banks, as it helps to regulate financial activities and plays the most important role when pricing financial instruments. Investment banking evaluates the worth of companies to create capital in corporate financing, facilitate mergers and acquisitions, conduct corporate restructuring or reorganizations, and for investment purposes.
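The "isolate, then score" step described above can be sketched as follows, with entirely made-up fields and weights standing in for a trained machine learning model:

```python
# Hypothetical relevant fields and linear weights -- for illustration only.
RELEVANT = ("monthly_spend", "logins_per_month", "products_held")
WEIGHTS = {"monthly_spend": 0.5, "logins_per_month": 0.3, "products_held": 0.2}

def isolate(record):
    """Keep only the fields the model actually uses."""
    return {k: record[k] for k in RELEVANT}

def score(features):
    """Toy linear stand-in for a customer-value model."""
    return sum(WEIGHTS[k] * v for k, v in features.items())

raw = {"customer_id": 42, "browser": "IE11", "monthly_spend": 1.2,
       "logins_per_month": 20, "products_held": 3}
features = isolate(raw)
print(round(score(features), 2))  # 7.2
```

The point of the isolation step is that terabytes of raw digital-banking telemetry shrink to the handful of variables the model consumes, before any scoring happens.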


Improving security is top driver for ISO 27001


“Unfortunately, as long as cyber crime remains a lucrative trade, risks will continue to escalate and attackers will continue to proliferate,” said Alan Calder, founder and executive chairman of IT Governance. “To counter this, organisations need to be fully prepared. ISO 27001, an information security standard designed to minimise risks and mitigate damage, offers the preparedness that organisations need.”  Other top reasons for implementing ISO 27001 include gaining a competitive advantage (57%), ensuring legal and regulatory compliance (52%) and achieving compliance with the EU’s General Data Protection Regulation (GDPR), which was cited by 48% of respondents. According to IT Governance, ISO 27001 provides an excellent starting point for achieving the technical and operational measures required by the GDPR to help mitigate data breaches. Closely in line with the drivers for implementing ISO 27001, improved information security was by far the greatest advantage afforded by achieving certification, according to 89% of respondents.


NSX technology shifts virtual administrator responsibilities


NSX technology, and network virtualization broadly, lives at the kernel on each of the hosts. It has to exist at this level to have access to the traffic it needs without affecting the performance of the VMs. This means it's a host extension, and it falls on the virtual admin to ensure installation and functionality. After that's complete, however, the responsibilities can shift to different people. The functions of firewall and router rules haven't changed just because the environment has moved from physical to virtual, which implies these functions remain the network engineers' responsibilities. The network engineers still have relevant, specialized knowledge, but these rules are often generated automatically based on the VM deployment. Network mapping software, such as vRealize Network Insight, can help manage this additional complexity. Network engineers and virtual admins can both use these tools to examine the virtual network, ensure functionality and minimize risk before establishing a software-defined network.


What is CUDA? Parallel programming for GPUs

Without GPUs, those training runs would have taken months rather than a week to converge. For production deployment of those TensorFlow translation models, Google used a new custom processing chip, the TPU (tensor processing unit). In addition to TensorFlow, many other DL frameworks rely on CUDA for their GPU support, including Caffe2, CNTK, Databricks, H2O.ai, Keras, MXNet, PyTorch, Theano, and Torch. In most cases they use the cuDNN library for the deep neural network computations. That library is so important to the training of the deep learning frameworks that all of the frameworks using a given version of cuDNN have essentially the same performance numbers for equivalent use cases. When CUDA and cuDNN improve from version to version, all of the deep learning frameworks that update to the new version see the performance gains. Where the performance tends to differ from framework to framework is in how well they scale to multiple GPUs and multiple nodes.
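Multi-GPU and multi-node performance is usually compared via scaling efficiency: the achieved speedup divided by the ideal linear speedup. A quick sketch of that calculation, with invented epoch times rather than measured framework numbers:

```python
def scaling_efficiency(t1: float, tn: float, n: int) -> float:
    """Achieved speedup (t1/tn) as a fraction of ideal linear speedup (n)."""
    speedup = t1 / tn
    return speedup / n

# Hypothetical epoch times: 100 minutes on 1 GPU, 30 minutes on 4 GPUs.
print(round(scaling_efficiency(100.0, 30.0, 4), 3))  # 0.833
```

An efficiency near 1.0 means near-linear scaling; frameworks with the same single-GPU cuDNN performance can still diverge sharply on this metric as GPU and node counts grow.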



NASA to use data lasers to beam data from space to Earth

Laser is not as easy as radio, though, NASA explains. That’s partly because the Earth’s rotation, coupled with the amount of time it takes data to reach the ground station from the spacecraft — albeit faster than radio — means tricky timing calculations are needed to determine where the narrower laser needs to hit. Traditional radio simply needs a data dump, from space, in the vicinity of the ground receiver, whereas laser needs to be continually connected during the transmission. The agency intends to employ a special locking, pointing mechanism. The idea is that a pre-scheduled passing craft’s telescope picks up a finder-signal sent from the ground station. That allows the transmitter to lock on. Mirrors in the spacecraft’s laser modulator are driven by sensors, and they send the beam. Using the LCRD, NASA is aiming for a 1.24Gbps geosynchronous-to-ground optical link with two ground stations. The first flight, run by NASA's Goddard Space Flight Center in Greenbelt, Maryland, is expected to take place next year.
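Some back-of-the-envelope arithmetic puts the 1.24Gbps figure in context. The 10-minute contact window and the 200Mbps radio rate below are assumptions for illustration, not NASA figures:

```python
def gigabytes_transferred(rate_bps: float, seconds: float) -> float:
    """Data volume moved by a continuous link: bits -> bytes -> gigabytes."""
    return rate_bps * seconds / 8 / 1e9

pass_seconds = 10 * 60  # assumed 10-minute contact window

print(round(gigabytes_transferred(1.24e9, pass_seconds), 1))  # 93.0 GB over the optical link
print(round(gigabytes_transferred(200e6, pass_seconds), 1))   # 15.0 GB over a 200 Mbps radio link
```

The roughly sixfold difference per pass is why the tricky pointing and timing calculations are considered worth the trouble.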


Want a CIO role? Here are the top skills you need and how to get there

While technical skills are more critical, that doesn't necessarily mean executive teams are looking for a former programmer or network engineer to fill their CIO role. A CIO must appreciate the balance between the hype/promise of new technologies and the reality of business, Inuganti said. Despite the need for technical skills, making a jump into leadership at the CIO level requires a deep understanding of the business. CIO candidates must understand the metrics that drive the business, what competitors are doing, and more, Inuganti said. The market previously went too far to the business side of things, but with the growth of cloud, big data, artificial intelligence (AI) and other technologies, the role now demands more technical skills. In terms of what skills are currently hot, data was always there, Inuganti said, but skills around data analysis are growing in desirability for CIOs. He said it's the hottest commodity in the market today, based on what he has seen with executive searches.


Inside the world's most prolific mobile banking malware

The malware's ability to read messages also means it can intercept text messages from the bank containing one-time passwords, helping the attackers to steal from accounts that use additional security. In addition, Asacub ensures the user can't check their mobile banking balance or change any settings because the permissions it has been given enables it to prevent the legitimate banking app from running on the phone. The attacks might seem basic, but they still work, and Kaspersky figures say Asacub currently accounts for 38 percent of mobile banking trojan attacks. "The example of the Asacub Trojan shows us that mobile malware can function for several years with minimal changes in its distribution pattern," Shishkova told ZDNet. "One of the main reasons for this is that the human factor can be leveraged through social engineering: SMS-messages look like they are meant for a certain user, so victims unconsciously click on fraudulent links. In addition, with regular change of domains from which the Trojan is distributed, catching it requires heuristic methods of detection," she added.



Quote for the day:


"The People That Follow You Are A Reflection Of Your Leadership." -- Gordon TredGold

Daily Tech Digest - August 29, 2018

More often than not, the network will be multi-vendor, consisting of numerous domains with operational and architectural teams operating in silos. Computer networks are complex, and this complexity can be ‘managed out’ by introducing a framework that abstracts the complexity. Lowering complexity and getting things right in a standardized way introduces you to the world of automation. Within a network, there are elements that used to be either easy or hard to automate. Here, one should not assume that if something is easy to automate, we should immediately dive in without considering the easy-versus-impact ratio. Operating system upgrades are easy to automate but have a large impact if something goes wrong. No one wants to live in a world of VLANs and ports. Realistically, they have a relatively low impact with a very basic configuration that needs to be on every switch. This type of device level automation is an easy ramp to automation as long as it does not touch any services.
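That "basic configuration that needs to be on every switch" is the classic entry point: render one baseline from a template for every device. A minimal sketch, with invented VLAN names and IDs:

```python
# Hypothetical site-wide VLAN baseline -- IDs and names are invented.
VLANS = {10: "users", 20: "voice", 30: "printers"}

def render_vlan_config(hostname: str, vlans: dict) -> str:
    """Render the same low-impact VLAN baseline for any switch."""
    lines = [f"hostname {hostname}"]
    for vid, name in sorted(vlans.items()):
        lines.append(f"vlan {vid}")
        lines.append(f" name {name}")
    return "\n".join(lines)

print(render_vlan_config("access-sw-01", VLANS))
```

Because the template is the single source of truth, every switch gets an identical, reviewable baseline, exactly the low-impact, service-free automation the paragraph recommends starting with.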


The bright future of machine learning

Mind+machine partnerships are useful in any situation where you’re dealing with a large amount of data, constrained time, or the need for continuous coaching or training. Using machine partners can help us make better choices, decisions and products, with a key advantage being that machines can work on demand, while another human might not be able to learn or respond as quickly. An example I use in class is the use of mind+machine to improve food safety at restaurants. Computers use pre-existing historical data to predict when to inspect restaurants. The computer can identify a restaurant that’s likely to have a violation, and then human inspectors can follow up. And this partnership has done a much better job — inspectors were getting to restaurants that had critical violations about a week earlier than they would have if they had just gone with the normally generated schedule.
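The inspection example amounts to a simple prioritization loop: compute a risk score from historical data and hand the ranked list to human inspectors. The scoring rule and records below are invented for illustration:

```python
def prioritize(restaurants):
    """Rank restaurants by a naive risk score so inspectors visit the riskiest first."""
    def risk(r):
        # Past critical-violation rate, with recent complaints weighted in.
        return (r["critical_violations"] / max(r["inspections"], 1)
                + 0.1 * r["recent_complaints"])
    return [r["name"] for r in sorted(restaurants, key=risk, reverse=True)]

history = [
    {"name": "Cafe A",  "inspections": 10, "critical_violations": 1, "recent_complaints": 0},
    {"name": "Grill B", "inspections": 4,  "critical_violations": 3, "recent_complaints": 2},
    {"name": "Diner C", "inspections": 8,  "critical_violations": 2, "recent_complaints": 1},
]
print(prioritize(history))  # ['Grill B', 'Diner C', 'Cafe A']
```

The machine supplies the ordering; the human inspector supplies the judgment on site, which is the division of labor the mind+machine framing describes.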


This Entrepreneur Shares The Focus Strategy That Helped Him Build an App Used by Millions

Moving, with all the attendant logistical headaches and emotional investment, can be one of the most stressful things you can experience. When your living situation is in flux, it can affect every aspect of your life. And that’s before you factor in trying to find a stranger, or an untested friend or acquaintance, to split the costs with you. This is the problem that Ajay Yadav wants to solve with his company Roomi. He founded the startup in 2015 to help people looking for roommates connect with people who actually are who they say they are. Users of Roomi sign up for the service by completing a background check that includes ID verification and social media accounts. If the prospective roommates think they have a match, they can plan to meet through a secure in-app messaging platform. Since launching in New York City three years ago, the company now operates in more than 20 cities, has acquired four companies, raised $17 million in funding and has a user base of 2.4 million.


Stop Talking Gobbledygook to the Business

For explaining or defending a machine’s good decisions and fixing the bad ones, you’ll want to be able to see scored machine learning output for each record combined with the top variable value details in plain business language that most influenced the predicted outcome. For example: Record ID 232333 was predicted to be a high value customer because of size greater than 10,000 employees, monthly spend between $1M and $1.5M, and so on for relevant decision influencing input variables. To start earning stakeholder trust early on in your machine learning projects, share intermediate reports such as top outcome influencers that can be invaluable to the line of business. Machine learning can rapidly narrow down the scope of potential variables that matter most when faced with hundreds or thousands of variables to analyze.  As you create models, visually share progress and insights on where your model is accurate and where it makes mistakes using scatter plots, combination charts and interactive data visualization tools.
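For a linear scoring model, the "top influencer" report falls out directly: each variable's contribution is its weight times its value, and the largest absolute contributions become the plain-language reason codes. A sketch with invented weights and a hypothetical record:

```python
def top_influencers(weights, record, k=2):
    """Return the k variables that most influenced this record's score."""
    contributions = {name: weights[name] * value for name, value in record.items()}
    ranked = sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)
    return ranked[:k]

# Hypothetical model weights and one scored record.
weights = {"employees_10k_plus": 2.0, "monthly_spend_musd": 1.5, "support_tickets": -0.2}
record = {"employees_10k_plus": 1, "monthly_spend_musd": 1.2, "support_tickets": 3}

print(top_influencers(weights, record))  # ['employees_10k_plus', 'monthly_spend_musd']
```

Translated for the business, that output reads as the kind of sentence the paragraph describes: this record was predicted high-value because of employee count over 10,000 and monthly spend. More opaque model families need dedicated explanation techniques, but the reporting format is the same.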


Dell Latitude 7490 review: A solid business all-rounder

There is a good range of ports and connectors, including a Smart Card reader, NFC sensor and fingerprint reader -- the latter two located on the wrist rest. The Smart Card reader sits on the front left edge, where there are also two USB 3.1 ports, a full size HDMI connector and a USB Type-C port with DisplayPort and Thunderbolt 3. The large, round power jack is at the back of this edge. Meanwhile the right edge offers a headset jack, a MicroSD card reader, a SIM slot, a third USB 3.1 port and an Ethernet port with a spring-out base that means it can be accommodated easily in the chassis. The pop-out SIM card caddy is perhaps a little vulnerable, though it's about as invisible as it could be, nestled at the bottom of the right edge. It accommodates a Micro-SIM rather than a Nano-SIM. It's nice to see a MicroSD card slot here, although full-size SD would be welcome too. My review sample performed well. Simultaneous writing into a web app, audio streaming and 20-plus Chrome tabs opened across two application windows presented it with no difficulties at all.


The GDPR And The B2B Seller: Keep Calm And Sell On


B2B sellers are struggling to engage empowered B2B buyers — those traveling on self-directed journeys — who are raising the bar for more insight, more co-creation, and more creativity. Piled on top of these challenges, GDPR seems like the whim of a capricious god in a cosmic smackdown, throwing more obstacles in the way of sales representatives. However, many of the identified seller pain points are actually ineffective tactics — a vestige of a bygone era — that are off-putting to customers and prospects who are fed up with a barrage of impersonal, non-purposeful, and irrelevant communications. GDPR prohibits selling methods that leverage nonconsensual use of personal data, and this new reality will ultimately be good for sellers willing to shift their behaviors. Sales will spend less time doing data entry and sending automatic emails and more time focusing on how they can help interested customers. One of the sales leaders we interviewed shared this sentiment: “For sales representatives to stay relevant, they need to stop automating things. This is just the tip of the iceberg for sales and marketing teams becoming more human.”


Are AI and “deep learning” the future of, well, everything?

Machine learning and deep learning have grown from the same roots within computer science, using many of the same concepts and techniques. Simply put, machine learning is an offshoot of artificial intelligence that enables a system to acquire knowledge through a supervised learning experience. It’s a straightforward enough process, in theory: a human being provides data for analysis, and then gives error-correcting feedback that enables the system to improve itself. Depending upon the patterns in the data it’s exposed to, and which of those it recognises, the system will adjust its actions accordingly. It's this ability to self-develop without the need for explicit programming, but rather to change and adapt when exposed to new data, that makes machine learning such a powerful tool. However, what makes deep learning even more valuable is that it does so without, or with much less, human supervision. David Wood, co-founder of Symbian and now a “futurist” at Delta Wisdom, explains the difference using the example of face recognition.
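The error-correcting feedback loop described here can be shown in miniature with a perceptron: the system nudges its weights whenever its prediction disagrees with a human-provided label, improving without being explicitly reprogrammed. The training data below is a toy AND-style rule:

```python
def train(samples, epochs=10, lr=0.1):
    """Perceptron learning: adjust weights by the error signal on each labeled example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # the human-supplied corrective feedback
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples: output is 1 only when both inputs are 1 (an AND rule).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Deep learning stacks many such adjustable layers and, as the article notes, automates much of the feature design that would otherwise need human supervision.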


5 ways the World Economic Forum says AI is changing banking


“As products and services become more easily comparable and therefore commoditized, it’s not sufficient any more to compete on delivering credit quickly and at a good price, which have been the historic competitive levers” for banks, said Rob Galaski, Deloitte Global Banking and Capital Marketing Consulting leader and one of the authors of the report. For example, to keep its auto loan business relevant, Royal Bank of Canada is piloting a forecasting tool for car dealers to predict demand for vehicle purchases based on customer data. Such information could be more valuable to the dealers than any banking product, Galaski said. “We think that is an exemplar of how we see the industry changing overall,” he said. “Much of the AI debate coming into our work was around replacing humans and doing existing things better or faster. But that take on AI dramatically underestimates the impact. The very way we go about conducting business can be redesigned using AI.”


Excess data center heat is no longer a bug -- it’s a feature!

Developed by MIRIS in cooperation with architecture firm Snøhetta, Skanska, Asplan Viak, and Nokia, The Spark also requires urban data centers to be built in close proximity to the buildings hoping to use the excess heat. These kinds of urban locations may further increase costs and put practical limitations on the size of data centers that can take advantage of the concept. While smaller data centers are increasingly popular, they may not be able to achieve the economies of scale enjoyed by the largest facilities, which can run into the millions of square feet. In addition, depending on the time of day, the weather, and other factors, the heat generated by the data center may not always precisely match the needs of the surrounding community, either generating more heat than the local homes and businesses need, or requiring them to get additional heat from other sources. That’s why the Lyseparken implementation includes a stake in the local power company, Fast Company said, and will “produce and consume electricity from a mix of renewable sources, including solar and thermal energy.”


EU regulation will drive U.S. banks to embrace FinTech or lose market share

"We are already seeing the U.K.'s open banking initiative, which is based on but wider than PSD2, being explored in other markets, including in Central America, Asia and Africa. So it wouldn't be surprising to see similar developments in the U.S.," he added. Even before being pressured by PSD2, some European banks were embracing emerging digital technologies, such as real-time electronic payments; they often gained the technology either through partnerships with FinTechs or by acquiring them outright. "U.K. banks are not, at this stage, seeing FinTechs so much as competitors as they are seeing them as potential collaborators with whom they can develop new journeys, services and products," Chertkow said. "What is clear is that consumer behaviors are changing, particularly with younger generations. Traditional banks need to decide whether they want to maintain their existing business model and seek to differentiate it from the FinTechs or whether they need to respond by copying the best user experiences of the FinTechs."



Quote for the day:


"Commitment is the conviction that it's right to fight for what you want." --Tim Fargo


Daily Tech Digest - August 27, 2018

What are next generation firewalls? How the cloud and complexity affect them

So far, nextgen firewall vendors haven't been able to fully translate their features to the needs of cloud environments, says NSS Labs' Spanbauer. "This is a significant engineering feat, and we're not quite there yet with a perfect replica, virtualized or physical." However, they are taking advantage of other capabilities that cloud offers, including the real-time sharing of threat intelligence data. "If you're patient zero, then that's an incredibly difficult scenario to block against," he says. "However, if you give it a minute or two minutes, then patient 10 or 15 to 20, with real-time updates, can be protected by virtue of the cloud abilities of the firewall." There's also the possibility of nextgen firewalls expanding into the endpoint security space. "If they merged, that would be a lot easier for enterprises to manage," says Spanbauer. "But that's not going to happen." Perimeter protection and endpoint protection will remain distinct for the foreseeable future, but the two sets of technologies could mutually benefit one another, he says.



Modular Downloaders Could Pose New Threat for Enterprises

The threat actor behind the campaign — an entity that Proofpoint identifies as TA555 — has been distributing AdvisorsBot via phishing emails containing a macro that initially executed a PowerShell command to download the malware. Since early August, the attacker has been using a macro to run a PowerShell command, which then downloads a PowerShell script capable of running AdvisorsBot without writing it to disk first, Proofpoint said. Interestingly, since first releasing the malware in May, its authors have completely rewritten it in PowerShell and .NET. Proofpoint has dubbed the new variant PoshAdvisor and describes it as not identical to AdvisorsBot but containing many of the same functions, including the ability to download additional modules. ... It is certainly unusual for malware authors to do so and may be an attempt to further evade defenses. "For the enterprise, more variety in the threat landscape and newly coded malware increase complexity for defenders and should be driving investments in threat intelligence, robust layered defenses, and end user education," she says.


Machine learning turns unstructured secondary storage into globally accessible data

It’s important to note, particularly for security-minded organizations, that Cohesity isn’t aggregating the data, just the object metadata, which then points to where the data is. Now storage administrators can globally roll out policies or make upgrades across the multi-node environment with a single click. ... One of the biggest and underappreciated benefits of SaaS is the ability to aggregate data across multiple customers and compare the data. In one’s consumer life, think of Amazon providing recommendations such as “Customers that bought X also bought Y.” Cohesity can compare data and understand its utilization or backup frequency or other data management capabilities against its peers and then make the appropriate changes. Digital CIOs need to shed conventional thinking around storage and think more about globally accessible and optimized data. This becomes particularly important in the ML era, when the quality of data can make the difference between being a market leader or a laggard. In particular, secondary storage may be the biggest, wasted resource that a company has, and being able to harness the knowledge and insights captured in it could help organizations accelerate their digital transformation efforts.
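A hypothetical sketch of the metadata-only index described above: the global index stores object metadata plus a pointer to the node that actually holds the data, never the data itself. Object and node names are invented.

```python
class MetadataIndex:
    """Global index over secondary storage: metadata and location only, no data."""

    def __init__(self):
        self._index = {}

    def register(self, object_id: str, node: str, size_bytes: int) -> None:
        # Only metadata is aggregated; the object itself stays on its node.
        self._index[object_id] = {"node": node, "size": size_bytes}

    def locate(self, object_id: str) -> str:
        return self._index[object_id]["node"]

idx = MetadataIndex()
idx.register("backup/vm-042.img", "node-eu-3", 40 * 2**30)
idx.register("backup/db-prod.dump", "node-us-1", 12 * 2**30)
print(idx.locate("backup/vm-042.img"))  # node-eu-3
```

This separation is what lets an administrator apply a policy across every node with one operation while the data itself never leaves its security boundary.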


Why do enterprises take a long time to install vital security updates

The failure to rapidly deploy and install security updates is placing businesses at greater risk of a targeted cyberattack, as hackers look to exploit the vulnerabilities of outdated systems. Kollective’s report also found that 37% of IT managers list ‘a failure to install updates’ as the biggest security threat of 2018. This makes outdated software a bigger threat than password vulnerabilities (33%), BYOA / BYOD (22%) and unsecured USB sticks (9%). Even more startling, 13% of large businesses have given up on actively managing software distribution, and are, instead, passively asking employees to update their own systems. Kollective blames the failure to install updates on a combination of slow testing procedures and an inability to distribute updates automatically at scale. As Dan Vetras, CEO of Kollective explains: “Following numerous corporate cyberattacks over the last 12 months, today’s businesses are spending more than ever before on enhancing and improving their security systems. But, this investment is wasted if they aren’t keeping their systems up-to-date.


Here comes ‘antidisinformation as a service’

Most of the disinformation accounts deleted by Facebook, Twitter, Google and Microsoft were discovered not by those companies or the U.S. government, but by a company called FireEye. I told you in this space last year about disinformation as a service (DaaS). Most of the Russian disinformation campaigns are carried out by a private company called the Internet Research Agency. But now comes AaaS — antidisinformation as a service. That’s what FireEye provided this week to the Silicon Valley social networking companies. It considers itself a kind of NSA for hire — an intelligence organization, but for enterprises. How does it do it? FireEye’s methodology is multifaceted and a trade secret. But the company’s core competencies lie in discovering hidden malware and network hacks with the use of proprietary technology to detect behavioral anomalies — behavior by code and websites that isn’t normal. Once it finds the general nature of the weird behavior, it then does a lot of shoe-leather research.


What IPv6 features can be found in the latest specification?


The core IPv6 specification -- RFC 2460 -- has changed considerably since it was first released. The new IPv6 features are geared toward reliability, as well as operational and security considerations. To that end, the revised spec contains a security analysis of IPv6, with references to some of the work that's been carried out during the last few years, particularly in the area of IPv6 addressing. Other enhancements target IPv6 extension headers and fragmentation. For example, the original IPv6 specification allowed overlapping fragments -- that is, fragments that covered the same chunk of data from the original unfragmented datagram. The use of overlapping fragments to circumvent security controls was already very popular in the IPv4 world. However, even when there was no legitimate use for them in the IPv6 world, overlapping fragments were still considered valid. Such fragments were eventually declared illegal by RFC 5722, which was published in 2009. Thus, the new specification incorporates that update, banning overlapping fragments.
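The RFC 5722 rule is easy to sketch: model a datagram's fragments as (offset, length) byte ranges, and any overlap between ranges now renders the whole packet invalid.

```python
def has_overlapping_fragments(fragments) -> bool:
    """Check (offset, length) fragment ranges for any overlap, per RFC 5722."""
    ranges = sorted((off, off + length) for off, length in fragments)
    for (_, prev_end), (start, _) in zip(ranges, ranges[1:]):
        if start < prev_end:  # next fragment starts before the previous one ends
            return True
    return False

print(has_overlapping_fragments([(0, 8), (8, 8), (16, 4)]))  # False: contiguous, legal
print(has_overlapping_fragments([(0, 8), (4, 8)]))           # True: bytes 4-7 covered twice
```

Real reassembly code works on the fragment header's offset field (in 8-octet units), but the overlap test is the same idea; under the revised spec a receiver silently discards the datagram when this check fires.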


Microsoft, Salesforce plan to open source major enterprise software products

Microsoft ultimately decided that ONE is too important to keep to itself. “We have decided that this is such an important resource for everybody that just hoarding it ourselves is not the right thing to do,” Bahl said. “So, we are making it available to the entire community so that they can now — and it’s not just for production systems, but also for students that are now graduating.” The software will help large enterprises improve their network uptime by simulating changes to their network before rolling them out live. Microsoft hasn’t disclosed where it plans to release ONE, but GitHub — which Microsoft is in the process of acquiring — seems the logical choice. TransmogrifAI is an automated machine learning library for structured data, which makes sense coming from Salesforce, since its CRM products are built on the traditional row-and-column structure of a relational database. It’s written in Scala and built on top of Apache Spark, Apache’s in-memory analytics software.


Microsoft ups effort to drive Surface Go adoption

Microsoft Surface Go
One of the most fascinating things about executive leadership at most technology firms is that they generally don't get marketing. It doesn't seem to be taught in engineering schools, and even those who earn business degrees either opt not to take those classes or didn't absorb what they were taught. The result is that, in general, marketing is underfunded and staffed by people who don't understand the critical parts of human nature that form the foundation of successful marketing campaigns. Apple, during Steve Jobs' tenure, was my best example of a firm that truly got the power of marketing, and that company rose to be the most valuable (in terms of market cap) in the segment. This was despite the fact that, for much of that time, it was largely a one-product company (iPod to iPhone). Apple outspent everyone it competed with, with the occasional exception of Samsung, which only occasionally outspent Apple but did so with powerful competitive results (it takes regular shots at Apple).


10 common pitfalls that threaten data quality strategies


“Implementing a data quality strategy is not as simple as installing a tool or applying a one-time fix,” explains Patty Haines, president and founder of Chimney Rock Information Solutions, Inc., a consultancy that helps organizations build business intelligence and analytics environments through data warehouse services, solutions, and mentoring. “Organizations across the enterprise need to work together to identify, assess, remediate, and monitor data with the goal of continual data improvements.” Haines offers her advice on the 10 top challenges to a successful data quality strategy. ... “If differences in the definition and use of data continue, it can allow poor quality data to be entered, managed and reported,” Haines says. “The data quality strategy must include the business community, data governance, and subject matter experts working together to determine consistent and agreed-upon definitions to improve the quality of data.”


Why Facebook is powerless to stop its own descent

You could certainly argue that Facebook's problems aren't all of its own making. It's a tool that people use in whatever ways they decide. The fact that humanity has used the social network to power a renewal of tribalism, nationalism, and bigotry is hardly a phenomenon that Facebook or anyone else would have predicted. The problem for Facebook is that it took so long to respond--and it only truly did so after the issue became a PR nightmare. It had the opportunity to step up and figure out where the line was between healthy dialogue and hate speech, and it passed the buck. It prioritized user growth and activity over creating a healthy platform. A crisis doesn't build character, it reveals it--as the aphorism goes. Facebook has lost credibility. Few believe that it can be a leader in solving a problem that it helped create. As a result, the narrative around Facebook as a company and a platform is that it doesn't look out for its users' best interests. It doesn't put them first. And so more people are tuning out.



Quote for the day:


"Defeat is not the worst of failures. Not to have tried is the true failure." -- George Woodberry