Daily Tech Digest - December 11, 2019

Segment Routing uses a routing technique known as source packet routing. In source packet routing, the source or ingress router specifies the path a packet will take through the network, rather than the packet being routed hop by hop based on its destination address. Source packet routing is not a new concept; in fact, it has existed for over 20 years. MPLS, one of the most widely adopted forms of source packet routing, uses labels to direct packets through a network: when a packet arrives at an ingress node, an MPLS label is prepended to it, and that label determines the packet’s path through the network. While SR and MPLS are similar in that both are source-based routing approaches, there are a few differences between them. One key difference lies in a primary objective of SR, documented in RFC 7855: “The SPRING [SR] architecture MUST allow putting the policy state in the packet header and not in the intermediate nodes along the path.”


Never Mind Consumers, This Was a Year of Steady Infrastructural Progress

In 2019, much of the traction that does not come from exchanges or trading has been generated in the infrastructure layer. Node infrastructure provider Blockdaemon, having recognized the market’s propensity to proliferate new decentralized networks, is generating revenue across an impressive 22 such networks today and continues to grow month over month. The Graph is serving over 400 public smart contract subgraphs, with request volume clocking millions of daily data queries. Meanwhile, 3Box’s self-sovereign identity and data solution is rapidly integrating across the Ethereum ecosystem, within wallets like MetaMask, many of the new user onboarding solutions like Portis and Authereum, and even the governance experiment MolochDAO. Blockchain’s road to mainstream adoption depends on institutional backing of businesses that support blockchain infrastructure and enable traditional investors both to capitalize on and participate in digital asset networks. As such, the compliance levels of exchanges have been increasing to support institutional clients.


5G and Me: And the Golden Hour


The connected ambulance 5G network slicing concepts were demonstrated at Mobile World Congress (MWC) in Barcelona, Spain in February 2019 by the Dell EMC Cork Centre of Excellence (CoE). Network slicing is a type of virtual networking architecture, similar to software-defined networking (SDN) and network functions virtualization (NFV), whose goal is software-based network automation. This technology allows the creation of multiple virtual networks on a shared physical infrastructure. ... The goal for the future of connected care in emergencies would be to identify the conditions for stroke, congestive heart failure (CHF) and myocardial infarction (MI); measure and score at the site; predictively collect Electronic Medical Record (EMR) metadata in conjunction with specific image studies via DICOM (Digital Imaging and Communications in Medicine); and combine this with the metadata from disease-specific epidemiological studies for that geographic region — all within the “golden hour”. This combinatorial analysis at the “point of care” is the future and can prevent disability and death at scale — especially since not all ambulance visits are emergencies.


Google proposes hybrid approach to AI transfer learning for medical imaging


In transfer learning, a machine learning algorithm is trained in two stages. First, there’s pretraining, where the algorithm is generally trained on a benchmark data set representing a diversity of categories. Next comes fine-tuning, where it is further trained on the specific target task of interest. The pretraining step helps the model to learn general features that can be reused on the target task, boosting its accuracy. According to the team, transfer learning isn’t quite the end-all, be-all of AI training techniques. In a performance evaluation that compared a range of model architectures trained to diagnose diabetic retinopathy and five different diseases from chest x-rays, a portion of which were pretrained on an open source image data set, they report that transfer learning didn’t “significantly” affect performance on medical imaging tasks. Moreover, a family of simple, lightweight models performed at a level comparable to the standard architectures. In a second test, the team studied the degree to which transfer learning affected the kinds of features and representations learned by the AI models. They analyzed and compared the hidden representations in the different models trained to solve medical imaging tasks, computing similarity scores for some of the representations between models trained from scratch and those pretrained on ImageNet.
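
The two-stage recipe is easy to see in code. Below is a minimal sketch of pretrain-then-fine-tune in TensorFlow/Keras; the ResNet50 backbone, input shape, and five-class output are illustrative assumptions, not details from the Google study.

```python
# Minimal sketch of the two-stage transfer-learning recipe: start from a
# backbone pretrained on a broad benchmark (ImageNet), then fine-tune a
# small task-specific head on the target imaging task. Backbone, input
# shape, and class count are assumptions for illustration only.
import tensorflow as tf

NUM_CLASSES = 5  # e.g. five disease labels (assumption)

# Stage 1: reuse pretraining by loading ImageNet weights.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False,
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the general-purpose features

# Stage 2: fine-tune a new classification head on the target task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # supply your own labeled data
```

Training the same architecture with `weights=None` (from scratch) gives the kind of comparison baseline the researchers evaluated.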


Robotic exoskeletons: Coming to a factory, warehouse or army near you, soon 

Ford is thought to be one of the bigger users of exoskeletons to date, but other car makers are deploying exoskeletons too, although several have opted for build-your-own rather than off-the-shelf systems. Hyundai debuted its own exoskeleton vest, the VEX, earlier this year. The back-worn exoskeleton "is targeted at production-line workers whose job is primarily overhead, such as those bolting the underside of vehicles, fitting brake tubes, and attaching exhausts", Hyundai said, and is expected to be rolled out at Hyundai plants. GM, meanwhile, has teamed up with NASA to create a robotic glove that can help increase the amount of force a wearer can exert when gripping an object or lifting a piece of equipment for long periods, cutting the likelihood of strain or injury. Closer to home, the construction industry is also shaping up to be another significant user of exoskeletons. Builder Willmott Dixon, for example, started piloting the ExoVest at a Cardiff site last year. One factor driving the rollout of exoskeletons in both the construction and auto industries is the possibility of cutting worker injuries as well as enabling skilled staff to work for longer.


What does it mean to think like a data scientist?

Art is a very important part of that, because what we find in a lot of our data science engagements is there's a lot of exploration of what might be possible, the realm of what's possible. So, we tried to empower the power of ‘might,’ right? That might be a good idea, that might be something, because if you don't have enough might ideas, you never have anything, any breakthrough ideas. And so, this art of thinking like a data scientist, this kind of says, 'Yeah, there's a data science process.' But think about it as guardrails, not railroad tracks. And we're going to bounce in between these things. And oh, by the way, it's really important that your business stakeholders, your subject matter experts, also understand how to think like a data scientist in this kind of non-linear creative kind of fashion, so you come up with better ideas. Because we're all in search of variables and metrics that might be better predictors of performance, right? And the data science team will have some ideas from their past experience. 



Teams are struggling to implement these new tools, and 71 percent said that they are adding security technologies faster than they are adding the capacity to proactively use them. This added complexity is also compromising their threat response, with 69 percent of security decision makers surveyed saying that their security team currently spends more time managing security tools than effectively defending against threats. To make matters worse, a majority of enterprises are less secure today as a result of security tool sprawl, and over half (53%) say their security team has reached a tipping point where the excessive number of security tools in place adversely impacts their organization's security posture. ReliaQuest's CEO, Brian Murphy, provided further insight on the report's findings, saying: "Cyber threats continue to rise and require companies to mitigate risk. While it's tempting to think another piece of technology will solve the problem, it's far from true -- in fact, this survey proves more tools can worsen enterprise security by adding complexity without improving outcomes."


There’s No Opting Out of the California Consumer Privacy Act

For starters, GDPR applies to all European data but is a minimum requirement: individual countries in the EU have their own laws that are often more restrictive. CCPA, by contrast, applies to California data only and excludes any data that is already covered by a federal law, such as HIPAA or GLBA. While GDPR protects personal information (PI) that could potentially identify a specific individual -- including name, address, telephone number and Social Security number (SSN) -- CCPA goes further to include product purchase history, social media activity, IP addresses, and household information. Under CCPA, companies are required to include a single, clear and conspicuous "Do Not Sell My Personal Information" link on their homepages. GDPR, by contrast, offers various opt-out rights, each of which requires individual action. Under GDPR, administrative fines can reach 20 million euros or 4% of annual global revenue, whichever is greater. Under CCPA, the California Attorney General can fine companies $2,500 per violation or up to $7,500 for each intentional violation.


Google Chrome can now warn you in real time if you're getting phished


Between July and September, Google sent more than 12,000 warnings about state-sponsored phishing attacks targeting its users in the US. According to Verizon's annual cybersecurity report, phishing is the leading cause of data breaches, and Google said in August that it blocked about 100 million phishing emails every day. But phishing links don't just come in emails: They can also appear in malicious advertisements, or through direct messages on chat apps. For those of you using a Chrome browser, Google is launching an extra level of protection against phishing through real-time checks on site visits. You can turn it on by enabling "Make searches and browsing better" in your Chrome settings. This protection was already available for Chrome's Safe Browsing mode, which checked the URL of every website visited and made sure it was not on Google's block list. The block list is saved on devices and only synced every 30 minutes, allowing savvy hackers to bypass the filter by creating a new phishing URL before the list updates.


Big Changes Are Coming to Security Analytics & Operations

Nearly two-thirds (63%) of survey respondents claim that security analytics and operations are more difficult today than they were two years ago. This increasing difficulty is being driven by external changes and internal challenges. From an external perspective, 41% of security pros say that security analytics and operations are more difficult now due to rapid evolution in the threat landscape, and 30% claim that things are more difficult because of the growing attack surface. Security teams have no choice but to keep up with these dynamic external trends. On the internal side, 35% of respondents report that security analytics and operations are more difficult today because they collect more security data than they did two years ago, 34% say that the volume of security alerts has increased over the past two years, and 29% complain that it is difficult to keep up with the volume and complexity of security operations tasks. Security analytics/operations progress depends upon addressing all these external and internal issues.



Quote for the day:


"Growth is painful. Change is painful. But nothing is as painful as staying stuck somewhere you don't belong." -- Mandy Hale


Daily Tech Digest - December 10, 2019

Internet of the Senses is on the horizon, thanks to AR and VR


While smell cannot be conveyed digitally, that will change, with smell becoming an online experience by 2030, the report found. More than half (56%) of respondents said technology would evolve to the point that they would be able to smell scents in films. This same application will be applied to sales as retailers market products commercially with smell, the report found, meaning perfume commercials could emit a scent. Along the same lines as smell, humans will also be able to experience taste through devices, according to the report. Nearly 45% of respondents believe that in the next 10 years, a device could exist that digitally enhances the food someone eats. This advancement could have significant impacts on health and diet, allowing people to eat healthier foods that taste more savory than they are. This application presents another opportunity for marketing retail, as consumers could taste food products. People viewing cooking programs could even taste the food that is on screen, the report found. More than half (63%) of respondents said smartphone users would be able to feel the shape and texture of digital icons.



4 Authentication Use Cases: Which Protocol To Use?

Where strong security is a requirement, SAML is generally a good choice. All aspects of the exchange between the relying party (RP) and identity provider (IdP) can be digitally signed and verified by both parties. This provides high assurance that each party is communicating with the correct counterpart and not an imposter. In addition, the assertion from the IdP may be encrypted, so that HTTPS is not the only protection against attackers accessing users’ data. To add further security, signing and encryption keys may be rotated regularly. Taking OIDC to the same level of security requires extra cryptographic keys, as in the Open Banking extensions, and this can be relatively onerous to set up and maintain. However, OIDC benefits from its use of JSON and its simpler use by mobile apps, compared to SAML. ... Here, the preference will be for OIDC, as it is likely that a variety of devices, some not browser-based, might be involved, which normally rules out SAML. The built-in consent associated with OIDC enhances the privacy aspects of the data sharing. In addition, signing and encryption may be used to strengthen the security aspects to a degree that adequately meets the requirements of handling such data.
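
One reason OIDC travels well beyond the browser is that its ID token is just a signed JSON Web Token, which any client can validate locally. The sketch below uses Python's PyJWT library; the issuer URL, client ID, and hard-coded public key are placeholders, and a real client would fetch the provider's published JWKS keys instead.

```python
# Hypothetical sketch: validating an OIDC ID token as a signed JWT.
# Issuer, audience, and key are placeholders for illustration only.
import jwt  # PyJWT

def validate_id_token(id_token: str, idp_public_key: str) -> dict:
    claims = jwt.decode(
        id_token,
        idp_public_key,
        algorithms=["RS256"],               # reject unsigned / "none" tokens
        audience="my-client-id",            # placeholder relying-party client ID
        issuer="https://idp.example.com",   # placeholder identity provider
    )
    return claims  # e.g. claims["sub"] identifies the authenticated user
```

The equivalent SAML check involves XML signature verification and, optionally, assertion decryption, which is part of what makes SAML heavier to implement on non-browser devices.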



Predictions for AI and ML in 2020

The digital skills gap present within workforces has meant that employees are unsure about how to unleash AI’s full potential. But according to SnapLogic CTO, Craig Stewart, this problem could take a step towards being solved next year. “Transparency remains a hot topic and will continue into 2020 as companies aim to ensure transparency, visibility, and trust of AI and AI-assisted decisions,” said Stewart. “We’ll see further development and expansion of the ‘explainable AI movement,’ and efforts like it. ... Even though there are aforementioned worries regarding AI and ML possibly replacing human workers, some experts in digital innovation believe that the gradual inclusion of the technology will end up being a much more collaborative process. “Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to aid and augment them,” said Felix Gerdes, Insight UK‘s director of digital innovation services. “For instance, customer service workers need to be certain they are giving customers the right advice.


The Future of Spring Cloud's Hystrix Project


Spring Cloud Hystrix Project was built as a wrapper on top of the Netflix Hystrix library. Since then, it has been adopted by many enterprises and developers to implement the Circuit Breaker pattern. In November 2018, Netflix announced that it was putting the project into maintenance mode, which prompted Spring Cloud to announce the same. Since then, no further enhancements have been made to the Netflix library. At SpringOne 2019, Spring announced that Hystrix Dashboard will be removed from Spring Cloud 3.1, which makes it officially dead. As the Circuit Breaker pattern has been advertised so heavily, many developers have either used it or want to use it, and now need a replacement. Resilience4j has been introduced to fill this gap and provide a migration path for Hystrix users. Resilience4j was inspired by Netflix Hystrix but is designed for Java 8 and functional programming. It is lightweight compared to Hystrix, as it has the Vavr library as its only dependency. Netflix Hystrix, by contrast, has a dependency on Archaius, which has several other external library dependencies such as Guava and Apache Commons.
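
Resilience4j and Hystrix are Java libraries, so the snippet below is not their API; it is only a language-neutral Python sketch of the circuit-breaker behaviour they provide: after a run of failures the breaker opens and fails fast, then allows a trial call once a cool-down period has elapsed.

```python
# Not Resilience4j or Hystrix code -- just a minimal illustration of the
# circuit-breaker pattern both libraries implement.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold  # failures before opening
        self.reset_timeout = reset_timeout          # cool-down in seconds
        self.failures = 0
        self.opened_at = None                       # None => circuit closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None                   # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()        # trip the breaker
            raise
        self.failures = 0                           # success closes the circuit again
        return result
```

Resilience4j expresses the same idea functionally, by decorating a supplier or callable with a CircuitBreaker instance rather than wrapping calls in commands as Hystrix did.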


Dubai’s Kentech kicks off digital transformation drive


“Kentech has suffered with poor IT adoption partnerships in the past, so we needed something that was world-class. We wanted something that our business would love and use.” Kentech launched its tendering process early this year and by July it had selected Oracle as its cloud partner. “During the tendering process, we found that our business was closely aligned to construction,” said O’Gara. “Some of our requirements were quite complex, especially when dealing with reimbursable and fixed-price work – they can chop and change on a daily basis. We found that Oracle could meet those complex requirements. “For us, it was the ERP and budgeting models that were the differentiator. We’ve now started implementation and we’re going to go live at the end of this year with the first phase. We’re a project-based business, so we need to be able to scale up and down very quickly. The cloud model suits us perfectly as a business because we can be flexible, rather than going all out and saying ‘I need 10 more servers’.”


Hybrid multi-cloud a must for banks

Banks operating under a hybrid multi-cloud model can predictably and optimally manage finances as cost models shift from fixed to variable. Storing data on site with traditional facilities is expensive and holds banks in long-term contracts for a set amount of data storage. Banks over-provision infrastructure and storage, leading to payment for unnecessary resources. Hybrid cloud models allow banks to scale as needed, purchasing only what is immediately utilised under a subscription-based model offered by most CSPs. Procurement and implementation in the traditional way is slow, so capacity management and a degree of guessing are used, resulting in over-capitalised systems offering little ROI. As the cloud allows for scaling on a pay-as-you-go model, spend is greatly optimised. For example, UBS’s risk management platform is powered by Microsoft Azure, saving the financial services company 40 percent on infrastructure costs, speeding up calculations by 100 percent, and gaining near-infinite scale.


The 10 Best Examples Of How Companies Use Artificial Intelligence In Practice

Today, Waymo wants to bring self-driving technology to the world not only to move people around, but also to reduce the number of crashes. Its autonomous vehicles are currently shuttling riders around California in self-driving taxis. Right now, the company can’t charge a fare, and a human driver still sits behind the wheel during the pilot program. Google signaled its commitment to deep learning when it acquired DeepMind. Not only did the system learn how to play 49 different Atari games, the AlphaGo program was also the first to beat a professional player at the game of Go. Another AI innovation from Google is Google Duplex. Using natural language processing, an AI voice interface can make phone calls and schedule appointments on your behalf. ... Another innovative way Amazon uses artificial intelligence is to ship things to you before you even think about buying them. Amazon collects a great deal of data about each person’s buying habits, and has such confidence in that data that it uses predictive analytics not only to recommend items to customers but to predict what they need before they need it.


Verizon kills email accounts of archivists trying to save Yahoo Groups history

According to the Archive Team: "As of 2019-10-16 the directory lists 5,619,351 groups. 2,752,112 of them have been discovered. 1,483,853 (54%) have public message archives with an estimated number of 2.1 billion messages (1,389 messages per group on average so far). 1.8 billion messages (86%) have been archived as of 2018-10-28." Verizon has issued a statement to the group supporting the Archive Team, telling concerned archivists that "the resources needed to maintain historical content from Yahoo Groups pages is cost-prohibitive, as they're largely unused". The telecoms giant also said the people booted from the service had violated its terms of service and suggested the number of users affected was small. "Regarding the 128 people who joined Yahoo Groups with the goal to archive them – are those people from Archiveteam.org? If so, their actions violated our Terms of Service. Because of this violation, we are unable to reauthorize them," Verizon said.



Open source refers to an online project that is publicly accessible for anyone to modify and share, as long as they provide attribution to the original developer, reported TechRepublic contributor Jack Wallen in What is open source? Since its emergence over 20 years ago, open source has changed the internet. Without open source, the online experience would be "a far different place; much more limited, expensive, less robust, less feature-driven and less scalable. Big name companies would be much less powerful and successful as well in the absence of open source software," wrote Scott Matteson in How to decide if open source or proprietary software solutions are best for your business. ... Major tech companies have set their sights on open source development, with Microsoft's acquisition of GitHub and IBM's acquisition of Red Hat. However, developers are concerned about the impact these tech giants could have on the open source community, the report found. Nearly 41% of respondents said they were concerned about the level of involvement from major tech players in open source. The main concerns they cited involved possible self-serving intentions from big companies, the use of restrictive licenses that give large organizations an unfair competitive advantage, and overall trust of large corporations, the report found.


Is cloud migration iterative or waterfall?

Cloud migration projects have two dimensions. First, they are short-term sprints where a project team migrates a handful of application workloads and data stores to a single cloud or multicloud. These teams act independently, with little architectural oversight or governance, and last between two and six months. Second is the longer-term architecture, including security, governance, management, and monitoring. This may be directed by a cloud business office, the office of the CTO, or a master cloud architect. This set of processes goes on continuously. Here is the problem: the former seems to overshadow the latter, meaning that we’re moving to the cloud using ad hoc and decoupled sprints, all with little regard for common security and governance layers or any sort of management and monitoring. The result is something we’ve talked about here before: complexity. Although we built something that seems to work, applications migrated from one platform to another are deployed with different technology stacks.



Quote for the day:


"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah


Daily Tech Digest - December 09, 2019

The PC was supposed to die a decade ago. Instead, this happened


Not all that long ago, tech pundits were convinced that by 2020 the personal computer as we know it would be extinct. You can even mark the date and time of the PC's death: January 27, 2010, at 10:00 A.M. Pacific Time, when Steve Jobs stepped onto a San Francisco stage to unveil the iPad. The precise moment was documented by noted Big Thinker Nicholas Carr in The New Republic with this memorable headline: "The PC Officially Died Today." ... And so, here we are, a full decade after the PC's untimely death, and the industry is still selling more than a quarter-billion-with-a-B personal computers every year. Which is pretty good for an industry that has been living on borrowed time for ten years. Maybe the reason the PC industry hasn't suffered a mass extinction event yet is because they adapted, and because those competing platforms weren't able to take over every PC-centric task. So what's different as we approach 2020? To get a proper before-and-after picture, I climbed into the Wayback Machine and traveled back to 2010.


Netflix open sources data science management tool

Netflix has open sourced Metaflow, an internally developed tool for building and managing Python-based data science projects. Metaflow addresses the entire data science workflow, from prototype to model deployment, and provides built-in integrations to AWS cloud services.  Machine learning and data science projects need mechanisms to track the development of the code, data, and models. Doing all of that manually is error-prone, and tools for source code management, like Git, aren’t well-suited to all of these tasks. Metaflow provides Python APIs to the entire stack of technologies in a data science workflow, from access to the data through compute resources, versioning, model training, scheduling, and model deployment. ... Metaflow does not favor any particular machine learning framework or data science library. Metaflow projects are just Python code, with each step of a project’s data flow represented by common Python programming idioms. Each time a Metaflow project runs, the data it generates is given a unique ID. This lets you access every run—and every step of that run—by referring to its ID or user-assigned metadata.
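
A minimal flow gives a feel for the "just Python code" claim; the toy steps below are illustrative, not an example from Netflix.

```python
# A minimal, illustrative Metaflow flow: each @step is ordinary Python,
# self.next() wires the steps together, and artifacts assigned to self
# are versioned per run. Run locally with: python linear_flow.py run
from metaflow import FlowSpec, step

class LinearFlow(FlowSpec):

    @step
    def start(self):
        self.numbers = [1, 2, 3, 4]          # artifact, versioned per run
        self.next(self.transform)

    @step
    def transform(self):
        self.squares = [n * n for n in self.numbers]
        self.next(self.end)

    @step
    def end(self):
        print("squares:", self.squares)

if __name__ == "__main__":
    LinearFlow()
```

Because every run gets its own ID, past runs and their artifacts can later be retrieved through Metaflow's client API rather than being reconstructed by hand.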



AppSec in the Age of DevSecOps

Application security as a practice is dynamic. No two applications are the same, even if they belong in the same market domain, presumably operating on identical business use-cases. Some (of the many) factors that cause this variance include the technology stack of choice, the programming style of developers, the culture of the product engineering team, the priorities of the business, the platforms used, etc. This consequently results in a wide spectrum of unique customer needs. Take penetration testing as an example. This is a practice area that is presumably well-entrenched, both as a need and as an offering in the application security market. However, in today's age, even a singular requirement such as this could make or break an initial conversation. While for one prospect the need could be to conduct the test from a compliance (only) perspective, another's need could stem from a proactive software security initiative. There are many others who have internal assessment teams and often look outside for a third-party view.


Data centers in 2020: Automation, cheaper memory

Storage-class memory is memory that goes in a DRAM slot and can function like DRAM, but can also function like an SSD. It has near-DRAM speed but has storage capabilities, too, effectively turning it into a cache for SSDs. Intel and Micron were working on SCM together but parted company. Intel released its SCM product, Optane, in May, and Micron came to market in October with QuantX. South Korean memory giant SK Hynix is also working on an SCM product, one that differs from the 3D XPoint technology Micron and Intel use. ... Remember when everyone was looking forward to shutting down their data centers entirely and moving to the cloud? So much for that idea. IDC’s latest CloudPulse survey suggests that 85% of enterprises plan to move workloads from public to private environments over the next year. And a recent survey by Nutanix found 73% of respondents reported that they are moving some applications off the public cloud and back on-prem. Security was cited as the primary reason. And since it’s doubtful security will ever be good enough for some companies and some data, it seems the mad rush to the cloud will likely slow a little as people become more picky about what they put in the cloud and what they keep behind their firewall.


Batch Goes Out the Window: The Dawn of Data Orchestration

Add to the mix the whole world of streaming data. By open-sourcing Kafka to the Apache Foundation, LinkedIn let loose the gushing waters of data streams. These high-speed freeways of data largely circumvent traditional data management tooling, which can't stand the pressure. Doing the math, we see a vastly different scenario for today's data, as compared to only a few years ago. Companies have gone from relying on five to 10 source systems for an enterprise data warehouse to now embracing dozens or more systems across various analytical platforms. Meanwhile, the appetite for insights is greater than ever, as is the desire to dynamically link analytical systems with operational ones. The end result is a tremendous amount of energy focused on the need for ... meaningful data orchestration. For performance, governance, quality and a vast array of business needs, data orchestration is taking shape right now out of sheer necessity. The old highways for data have become too clogged and cannot support the necessary traffic. A whole new system is required. To wit, there are several software companies focused intently on solving this big problem. Here are just a few of the innovative firms that are shaping the data orchestration space.


With the majority of companies looking for expertise in the three- to 10-year range, Robinson said they must change their traditional recruitment/training tactics. "The technical skill supply is far less than the demand, so companies are not going to simply be able to meet their exact needs on the open market,'' he said. "There must be a willingness to look outside the normal sources for technical skill, and there must be a willingness to invest in training to get workers up to speed once they are in house." The trend is toward specialization, "but this certainly introduces a financial challenge,'' he said, since most companies cannot afford to build large teams of specialists. So depending on the company's strategy, they may lean more on generalists or they may explore different mixes of internal/external talent. "Even for tech workers who specialize, knowledge across the different areas of IT is necessary for efficient operation of complex systems,'' Robinson said. The primary approach most tech workers are taking for career growth is to deepen their skills in their area of expertise, he said. But they must have knowledge in other areas beyond this, Robinson stressed, especially as tech workers move from a junior level to an architect level.


Seagate doubles HDD performance with multi-actuator technology

The technology is pretty straightforward. Say you have a drive with multiple platters. Traditionally, a single actuator controls the drive heads and moves them all in unison over every platter. Seagate's multi-actuator makes two independent actuators out of one, so in a six-platter drive, the two actuators cover three platters each. ... While SSDs have buried HDDs in terms of performance, they simply can’t match HDDs for capacity. Of course there are multi-terabyte SSDs available, but they cost many times more than the 12TB/14TB HDDs that Seagate and its chief competitor Western Digital offer. And data centers are not about to go all-SSD yet, if ever. So there's definitely a place for faster HDDs in the data center. Microsoft has been testing Exos 2X14 enterprise hard drives with MACH.2 technology to see if it can maintain the IOPS required for some of Microsoft’s cloud services, including Azure and the Microsoft Exchange Online email service, while increasing available storage capacity per data-center slot.


Synchronizing Cache with the Database using NCache

Caching improves the performance of web applications by reducing resource consumption. It achieves this by storing page output or relatively stale application data across HTTP requests. Caching makes your site run faster and provides a better end-user experience. You can take advantage of caching to reduce the consumption of server resources by reducing server and database hits. The cache object in ASP.NET can be used to store application data and reduce expensive server (database server, etc.) hits. As a result, your web page is rendered faster. When you are caching application data, you would typically have a copy of the data in the cache that also resides in the database. This duplication of data (both in the database and in the cache) introduces data consistency issues: the data in the cache must be kept in sync with the data in the database. You should know how data in the cache can be invalidated and removed in real time when any change occurs in the database.
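
NCache and the ASP.NET Cache object are .NET APIs, so the snippet below is only a language-neutral sketch of the cache-aside pattern the passage describes: reads fall back to the database on a miss, and application writes explicitly invalidate the matching cache entry. When the database can also change outside the application, real-time change notifications of the kind the passage mentions take the place of the explicit invalidation shown here.

```python
# Language-neutral sketch of keeping a cache in sync with the database.
# The cache dict stands in for a distributed cache such as NCache, and
# db.load_product / db.save_product are hypothetical data-access helpers.
cache = {}

def get_product(product_id, db):
    key = f"product:{product_id}"
    if key in cache:
        return cache[key]                     # cache hit: no database round-trip
    row = db.load_product(product_id)         # cache miss: read from the database
    cache[key] = row
    return row

def update_product(product_id, new_values, db):
    db.save_product(product_id, new_values)   # write goes to the database first
    cache.pop(f"product:{product_id}", None)  # invalidate the now-stale entry
```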


Coders are the new superheroes of natural disasters

It will be a launching point for open-source programs like Call for Code and "Clinton Global Initiative University" and will support the entire process of creating solutions for those most in need. Call for Code is seeking solutions for this year's challenge, and coders can go to the 2019 Challenge Experience to join. Call for Code unites developers and data scientists around the world to create sustainable, scalable, and life-saving open source technologies via the power of cloud, AI, blockchain and IoT tech. Clinton Global Initiative University partners with IBM and commits to inspiring university students to harness modern, emerging and open-source technologies to develop solutions for disaster response and resilience challenges. "Technology skills are increasingly valuable," Krook said, "even for students who don't intend to become professional software developers. For computer science students, putting the end user first, and empathizing with how they hope to use technology to solve their problems—particularly those that represent a danger to their health and well-being—will help them understand how to build high-quality and well-designed software."



There is a widespread belief that rules, structure and processes inhibit freedom, and that organizations that want to build a culture of autonomy and performance need to avoid them like the plague. ... There are times in history when this has happened to entire societies. When the leaders of the French Revolution abolished the laws of the "Ancien Regime", the result was terror. When Russia descended into chaos after the revolution of 1917, the result was civil war and the emergence of a tyrant, Stalin, who began a sustained terror of his own. When the Weimar Republic in Germany failed in the 1920s, the result was Hitler. In our own time, as social structures weaken, strongmen like Putin or Erdogan come to power and impose personal rules of their own. Societies which abolish laws become chaotic. In chaos, there is absolute freedom. As the philosopher Hegel observed, absolute freedom is not freedom at all, but a playground for the arbitrary exercise of power, which ends in terror. In terror, only a few are free, and many are slaves.



Quote for the day:


"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche