Daily Tech Digest - November 26, 2019

Exploit kits, or EKs, are web-based applications hosted by cyber-criminals. EK operators usually buy web traffic from malvertising campaigns or botnet operators. Traffic from malicious ads or hacked websites is sent to an EK's so-called "gate," where the EK operator selects only users with specific browsers or Adobe Flash versions and redirects these possible targets to a "landing page." This is where the EK runs an exploit -- hence the name exploit kit -- and uses a browser or Flash vulnerability to plant and execute malware on a user's computer. But in a report released last week, Malwarebytes researchers say EK operators are changing their tactics. Instead of dropping malware on disk and then executing it, at least three of the nine currently active EKs are now using fileless attacks. A fileless attack relies on loading the malicious code into the computer's RAM, without leaving any traces on disk. Fileless malware has been around for more than half a decade, but this is the first time EKs have broadly adopted the technique.


Samsung adds two modems to help enable wider 5G rollout


"Samsung has tapped its leadership in semiconductor and network technology–and combined it with its expertise in 5G research and development–to introduce one of the industry's first SoC 5G New Radio modems: the S8600 and S9100," Johnston wrote. ASIC-based system-on-a-chip (SoC) designs have become popular because they are more power-efficient and support higher operating frequencies, meeting the high-volume mass-production requirements the industry now demands. "These new modems support two architectural options for operators. The S8600 powers Samsung's Digital Unit in separated radio-digital configurations for both 4G and 5G, while the S9100 powers Samsung's 5G integrated Access Unit," he added in his blog post about the new modems. Johnston added that most companies are opting for more power-conscious circuits that are permanent and application-specific, as opposed to circuitry that needs to be programmed or reconfigured. The new Samsung tools will help support 5G networks that are easier to enable, smaller in size and more efficient in how they use power, he said.



The Impact of Cloud Computing on the Insurance Industry

Companies that use cloud systems greatly reduce the cost of purchasing hardware and software, thanks to on-demand, pay-per-use pricing. They no longer have to buy local servers and data centers, which require specialized personnel to manage and maintain, and which take up physical space and consume electricity 24 hours a day, 7 days a week. And, since most services are provided on demand, you can have access to abundant computing resources quickly, easily, and with the flexibility your business needs, without an expensive hardware or software investment. All of this helps optimize performance and internal processes, in part because hosting platforms, software, and databases remotely frees up memory and computing power on individual machines within the organization. Optimization and efficiency also apply to the production of documents, such as policies, forms, and contracts of various kinds.


T-Mobile data breach affects more than 1 million customers


Few details of the breach have been made public, other than the fact that it was a cyber attack and that approximately 1.5% of T-Mobile’s 75 million customers were affected – about 1.1 million. T-Mobile added that the suspicious activity was initially spotted at the beginning of November, with criminal hackers accessing the information of prepaid wireless account holders. Although the organisation promptly reported the incident to the authorities, it has waited until now to inform customers and the public – presumably to ensure it had all the facts straight. There are few things worse than announcing the details of a data breach only to later find that things are much worse than you initially thought. This happens all too often, with organisations facing an initial backlash, then adding fuel to the fire with more bad news. Because the breach occurred in the US rather than the EU, it isn’t subject to the GDPR (General Data Protection Regulation), which would have required T-Mobile to notify its supervisory authority within 72 hours of learning about the breach.


Why the IT4IT™ Standard is Key to Driving Business Value for CIOs


The IT4IT standard provides the CIO with a holistic overview of what their organization is doing well and what needs improvement, as well as highlighting how to close the gaps across the business. Three transformation use cases that the IT4IT standard helps accelerate are re-architecting to co-create strategy with the business; rationalizing the application portfolio to reduce waste and free up funds for innovation programs; and driving automation by analyzing and selecting integration points for automation to improve the quality and speed of product and service delivery. The pressure to continually innovate and adopt the most effective solutions is likely to remain in today’s business landscape. But in order to create real value, today’s CIO must not only focus on innovation but on empowering the IT system to work as a competitive driver. They must think holistically and prioritize the management of IT processes to meet the demands of customers, increased competition, and a changing business climate.


Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity

Software design practices like domain-driven design (DDD) and enterprise integration patterns (EIP) have been available since around 2003, and some teams had already been developing applications as modular services, but traditional infrastructure like heavyweight J2EE application servers for Java applications and IIS for .NET applications didn't help with modular deployments. With the emergence of cloud hosting, and especially PaaS offerings like Heroku and Cloud Foundry, the developer community had everything it needed for truly modular deployment and scalable business apps. This gave rise to the microservices evolution. Microservices offered the possibility of fine-grained, reusable functional and non-functional services. Microservices became popular in 2013-2014. They are powerful, and enable smaller teams to own the full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely impacting the other parts of the system.


Why your CEO’s personal risk taking matters


People expect CEOs to be risk takers, which makes sense given the nature of the job. That belief may be why corporate boards have been relatively forgiving of the kind of eccentric, grandiose, and sometimes dangerous behavior that the media laps up — and that the public and investors question when it is exposed. After all, it matches the “risk seeker” stereotype. But the #MeToo movement and the occasionally egregious behavior of bubble-economy CEOs suggests that times are changing. Boards and shareholders want to be confident not only that CEOs are comfortable taking business risks, but that they have good judgment about which risks to pursue and when to take a pass. “CEOs meaningfully outscore other executives in embracing risk, while still scoring within an optimal range,” the executive search firm Russell Reynolds concluded in a 2016 study based on an analysis (pdf) of psychometric profiles of more than 6,000 CEOs. The best-in-class CEOs also score high on judgment and low on self-promotion; they project a collected demeanor.


The top technologies that enabled digital transformation this decade


Forrester recently said that enterprises across the world are increasingly turning to automation for a variety of tasks that used to be handled by humans. This is changing the workforce on a fundamental level, prompting fears of mass job losses in the next decade. But the field is also making enterprises better in a variety of concrete ways. Dangerous, time-consuming jobs at factories are increasingly being done by an army of robots, keeping people away from positions that have historically been damaging to their health. This has even bled into other fields like customer service, where many companies now use automated systems to respond to basic questions and complaints from consumers. Part of what's spurring the increase in automation is the advancement of artificial intelligence (AI), which is equipping robots and machines with a wider set of capabilities. Enterprises are using AI for everything from security to human resources, allowing computers to handle tasks that have become costly or redundant. While fears of automation and AI are very real, recent studies have shown that people actually welcome the introduction of automation and are generally happy to let computers or robots handle menial tasks.


State police: We've been testing Spot robot dogs for use in dangerous situations


As per the agreement, MSP's bomb squad wanted to evaluate Spot in "law-enforcement applications, particularly remote inspection of potentially dangerous environments which may contain suspects and ordinances". The loan of Spot was uncovered by the American Civil Liberties Union (ACLU) of Massachusetts, which filed a public records request shortly after discovering a Facebook post by the Massachusetts State Police about an event on July 30 where it would explore the use of robotics in law-enforcement operations. An MSP spokesperson told WBUR that Spot was used as a "mobile remote observation device" that provided police with images of suspicious devices or potentially dangerous situations, such as where an armed suspect might be hiding. "Robot technology is a valuable tool for law enforcement because of its ability to provide situational awareness of potentially dangerous environments," state police spokesman David Procopio wrote. Spot has a 360-degree camera, crash protection, and can work in tough environments. It has a top speed of 3mph and can carry a payload of 14kg, or 31lb.


Looking into an intelligent cloud future

Self-balancing deployment models. Now we have public clouds, private clouds, traditional on-premises systems, edge-based computing, and more, and all these platforms can run systems and store data. The platforms will have many more capabilities in 10 years, and thus the core question becomes, What do you run, where? Hopefully, we’ll have self-migrating and self-balancing workloads figured out by next decade. Core enabling technology will determine where workloads and data sets should reside and move them there using automated back-end systems. This means that when you deploy an application workload on any type of system, the workload will understand what resources are available to it and self-migrate to the most optimal available platform. Criteria for the platform of choice will include lowest costs, fastest performance, and location closest to the application and data consumers. Punitive security automation. Hackers are getting more creative about how they attack systems in the public clouds. Right now, public cloud security is better than traditional system security, so hackers still focus on traditional systems as easy prey.
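The placement criteria described above (lowest cost, fastest performance, closest location) can be sketched as a toy scoring function. The platform names, numbers, and weights below are invented for illustration and are not from the article:

```python
# Toy sketch of "self-balancing" workload placement: score each candidate
# platform on cost, latency, and distance to the data consumers, then
# "migrate" the workload to the lowest-scoring (best) platform.
platforms = [
    {"name": "public-cloud", "cost": 0.3, "latency_ms": 40, "distance_km": 800},
    {"name": "edge-node",    "cost": 0.5, "latency_ms": 5,  "distance_km": 10},
    {"name": "on-prem",      "cost": 0.7, "latency_ms": 15, "distance_km": 0},
]

def placement_score(p):
    # Lower is better: cheap, fast, and close. The divisors just normalize
    # the three criteria onto a roughly comparable scale.
    return p["cost"] + p["latency_ms"] / 100 + p["distance_km"] / 1000

best = min(platforms, key=placement_score)
print(best["name"])  # the workload would self-migrate here
```

A real back-end system would of course weigh these criteria dynamically and per workload; the point is only that the decision reduces to optimizing over available platforms.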



Quote for the day:


"Education makes people difficult to drive, but easy to lead; impossible to enslave, but easy to govern." -- Lord Brougham


Daily Tech Digest - November 25, 2019

Avoiding the pitfalls of operating a honeypot

Operators of honeypots sometimes desire to trick the hacker into downloading phone-home and other technologies for purposes of identifying the hacker and/or better tracking their movements. Understand that downloading programming and other technology onto someone’s systems or attempting to access their systems without their knowledge or consent almost certainly violates state and federal anti-hacking laws – even if done in the context of cyber security. Penalties for these activities can be substantial and harsh. Never engage in such activities without the involvement and direction of law enforcement. ... Except for interactions with law enforcement, uses of personally identifiable information should be strictly avoided. Only aggregated or de-identified information should be used, particularly in the context of any published reports or statistics regarding operation of the honeypot. ... The law regarding entrapment is complicated, but if someone creates a situation intended solely to snare a wrongdoer, there is the potential for an argument this constitutes entrapment. In such a case, law enforcement may decline to take action on information gained from the honeypot.


Exploit code published for dangerous Apache Solr remote code execution flaw

At the time it was reported, the Apache Solr team didn't see the issue as a big deal, and developers thought an attacker could only access (useless) Solr monitoring data, and nothing else. Things turned out to be much worse when, on October 30, a user published proof-of-concept code on GitHub showing how an attacker could abuse the very same issue for "remote code execution" (RCE) attacks. The proof-of-concept code used the exposed 8983 port to enable support for Apache Velocity templates on the Solr server and then used this second feature to upload and run malicious code. A second, more refined proof-of-concept code was published online two days later, making attacks even easier to execute. It was only after the publication of this code that the Solr team realized how dangerous this bug really was. On November 15, they issued an updated security advisory. In its updated alert, the Solr team recommended that Solr admins set the ENABLE_REMOTE_JMX_OPTS option in the solr.in.sh config file to "false" on every Solr node and then restart Solr.
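Per the updated advisory described above, the mitigation is a one-line setting in the solr.in.sh config file on each node, followed by a restart. A sketch of the relevant line, assuming a standard Solr install layout:

```shell
# solr.in.sh -- per the updated Solr security advisory, disable remote JMX
# on every Solr node, then restart Solr for the change to take effect.
ENABLE_REMOTE_JMX_OPTS="false"
```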



Stateful Serverless: Long-Running Workflows with Durable Functions

There are a few reasons the workload doesn’t appear to be a good fit for Azure Functions at first glance. It runs relatively long (the example was just part of the game; an entire game may take hours or days). In addition, it requires state to keep track of the game in progress. Azure Functions by nature are stateless. They are designed as quick-running, self-contained transactions. Any concept of state must be managed using cache, storage, or database. If only the function could be suspended while waiting for asynchronous actions to complete and maintain its state when resumed. The Durable Task Framework is an open source library that was written to manage state and control flow for long-running workflows. Durable Functions build on the framework to provide the same support for serverless functions. In addition to facilitating potential cost savings for longer running workflows, it opens a new set of patterns and possibilities for serverless applications. To illustrate these patterns, I created the Durable Dungeon. This article is based on a presentation I first gave at NDC Oslo.
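The suspend-and-resume behavior described above can be illustrated with a small conceptual sketch. This is not the real Durable Functions or Durable Task Framework API; it only models the underlying replay idea, in which an orchestrator written as a generator is re-run against a history of recorded results, so no state has to live in memory between steps:

```python
# Conceptual sketch of a durable orchestration (hypothetical names, not the
# real API). Each `yield` suspends the workflow; the runtime appends completed
# results to a history and "replays" the generator from the start to rebuild
# its state deterministically.

def game_orchestrator():
    player = yield ("wait_event", "PlayerJoined")      # suspend until a player joins
    monster = yield ("call_activity", "SpawnMonster")  # run a stateless activity function
    outcome = yield ("wait_event", "BattleResult")     # suspend again, possibly for days
    return f"{player} vs {monster}: {outcome}"

def replay(orchestrator, history):
    """Re-run the generator, feeding in results already recorded in history."""
    gen = orchestrator()
    step = gen.send(None)            # advance to the first yield
    for result in history:
        try:
            step = gen.send(result)  # resume with the recorded result
        except StopIteration as done:
            return done.value        # orchestration finished
    return step  # still pending: the next action the runtime must schedule

# With a complete history, replay deterministically produces the final result.
final = replay(game_orchestrator, ["alice", "dragon", "victory"])
```

With a partial history (say, only "alice"), `replay` instead returns the next pending action, which is how the runtime knows what to schedule before suspending again.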


The Edge of Test Automation: DevTestOps and DevSecOps

DevTestOps allows developers, testers, and operations engineers to work together in a shared environment. Apart from running test cases, DevTestOps also involves writing test scripts and performing automated, manual, and exploratory testing. In the past few years, DevOps and automation testing strategies have received a lot of appreciation because teams were able to develop and deliver products in the minimum time possible. But many organizations soon realized that without continuous testing, DevOps provides an incomplete delivery of software that might be full of bugs and issues. And that’s why DevTestOps was introduced. Now, DevTestOps is growing in popularity because it improves the relationship between the team members involved in a software development process. It not only helps in faster delivery of products but also provides high-quality software. And when the software is released, automated test cases are already in place for future releases.


Q&A with Tyler Treat on Microservice Observability

A common misstep I see is companies chasing tooling in hopes that it will solve all of their problems. "If we get just one more tool, things will get better." Similarly, seeking a "single pane of glass" is usually a fool’s errand. In reality, what the tools do is provide different lenses through which to view things. The composite of these is what matters, and there isn’t a single tool that solves all problems. But while tools are valuable, they aren’t the end of the story. As with most things, it starts with culture. You have to promote a culture of observability. If teams aren’t treating instrumentation as a first-class concern in their systems, no amount of tooling will help. Worse yet, if teams aren’t actually on-call for the systems they ship to production, there is no incentive for them to instrument at all. This leads to another common mistake, which is organizations simply renaming an Operations team to an Observability team. This is akin to renaming your Ops engineers to DevOps engineers thinking it will flip some switch. 


8 ways to prepare your data center for AI’s power draw

Existing data centers might be able to handle AI computational workloads but in a reduced fashion, says Steve Conway, senior research vice president for Hyperion Research. Many, if not most, workloads can be operated at half or quarter precision rather than 64-bit double precision. “For some problems, half precision is fine,” Conway says. “Run it at lower resolution, with less data. Or with less science in it.” Double-precision floating point calculations are primarily needed in scientific research, which is often done at the molecular level. Double precision is not typically used in AI training or inference on deep learning models because it is not needed. Even Nvidia advocates for use of single- and half-precision calculations in deep neural networks. AI will be a part of your business but not all, and that should be reflected in your data center. “The new facilities that are being built are contemplating allocating some portion of their facilities to higher power usage,” says Doug Hollidge, a partner with Five 9s Digital, which builds and operates data centers. “You’re not going to put all of your facilities to higher density because there are other apps that have lower draw.”
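The precision tiers Conway describes can be made concrete with Python's standard `struct` module, which can round-trip a value through IEEE 754 half (16-bit), single (32-bit), and double (64-bit) encodings and show how much survives at each tier:

```python
import struct

# Round-trip the same value through half, single, and double precision and
# compare the error. Deep-learning training typically tolerates the
# half-precision error; molecular-scale science generally does not.
value = 3.141592653589793

half   = struct.unpack('e', struct.pack('e', value))[0]  # 16-bit, ~3 significant digits
single = struct.unpack('f', struct.pack('f', value))[0]  # 32-bit, ~7 significant digits
double = struct.unpack('d', struct.pack('d', value))[0]  # 64-bit, ~15-16 significant digits

print(abs(value - half))    # error on the order of 1e-3
print(abs(value - single))  # error on the order of 1e-7
print(abs(value - double))  # exact round trip: 0.0
```

The hardware payoff is the flip side of the same trade: a half-precision operand needs a quarter of the memory bandwidth of a double, which is why lower-resolution runs draw less power.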


Kubernetes meets the real world

Kubernetes is enabling enterprises of all sizes to improve their developer velocity, nimbly deploy and scale applications, and modernize their technology stacks. For example, the online retailer Ocado, which has been delivering fresh groceries to UK households since 2000, has built its own technology platform to manage logistics and warehouses. In 2017, the company decided to start migrating its Docker containers to Kubernetes, taking its first application into production in the summer of 2017 on its own private cloud. The big benefits of this shift for Ocado and others have been much quicker time-to-market and more efficient use of computing resources. At the same time, Kubernetes adopters also tend to cite the same drawback: The learning curve is steep, and although the technology makes life easier for developers in the long run, it doesn’t make life less complex. Here are some examples of large global companies running Kubernetes in production, how they got there, and what they have learned along the way.


HP to Xerox: We don't need you, you're a mess


The HP Board of Directors has reviewed and considered your November 21 letter, which has provided no new information beyond your November 5 letter. We reiterate that we reject Xerox's proposal as it significantly undervalues HP. Additionally, it is highly conditional and uncertain. In particular, there continues to be uncertainty regarding Xerox's ability to raise the cash portion of the proposed consideration and concerns regarding the prudence of the resulting outsized debt burden on the value of the combined company's stock even if the financing were obtained. Consequently, your proposal does not constitute a basis for due diligence or negotiation. We believe it is important to emphasize that we are not dependent on a Xerox combination. We have great confidence in our strategy and the numerous opportunities available to HP to drive sustainable long-term value, including the deployment of our strong balance sheet for increased share repurchases of our significantly undervalued stock and for value-creating M&A.


A new era of cyber warfare: Russia’s Sandworm shows “we are all Ukraine” on the internet

This was “the kind of destructive act on the power grid we've never seen before, but we've always dreaded.” Even more concerning, “what happens in Ukraine we'll assume will happen to the rest of us too because Russia is using it as a test lab for cyberwar. That cyberwar will sooner or later spill out to the West,” Greenberg said. “When you make predictions like this, you don't really want them to come true.” Sandworm’s adversarial attacks did spill out to the West in its next big attack, the NotPetya malware, which swept across continents in June 2017 causing untold damage in Europe and the United States, but mostly in Ukraine. NotPetya, took down “300 Ukrainian companies and 22 banks, four hospitals that I'm aware of, multiple airports, pretty much every government agency. It was a kind of a carpet bombing of the Ukrainian internet, but it did immediately spread to the rest of the world fulfilling [my] prediction far more quickly than I would have ever wanted it to,” Greenberg said. The enormous financial costs of NotPetya are still unknown, but for companies that have put a price tag on the attack, the figures are staggering. 


Lessons Learned in Performance Testing


To remind ourselves, throughput is basically counting the number of operations done per some period of time (a typical example is operations per second). Latency, also known as response time, is the time from the start of the execution of the operation to receiving the answer. These two basic metrics of system performance are usually connected to each other. In a non-parallel system, latency is actually the inverse of throughput and vice versa. This is very intuitive - if I do 10 operations per second, one operation is (on average) taking 1/10 of a second. If I do more operations in one second, each operation has to take less time. Intuitive. However, this intuition can easily break in a parallel system. As an example, just consider adding another request-handling thread to the webserver. You’re not shortening the time of a single operation, so latency stays (at best) the same; however, you double the throughput. From the example above, it’s clear that throughput and latency are essentially two different metrics of a system. Thus, we have to test them separately.
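The arithmetic above can be checked in a few lines (the 100 ms figure is illustrative):

```python
# In a serial system, latency and throughput are inverses; adding
# parallelism raises throughput without shortening any single operation.

latency_s = 0.1  # each operation takes 100 ms

# One worker: throughput is the inverse of latency.
serial_throughput = 1 / latency_s          # 10 ops/sec

# Two request-handling threads: each operation still takes 100 ms
# (latency is unchanged), but twice as many complete per second.
workers = 2
parallel_throughput = workers / latency_s  # 20 ops/sec

print(serial_throughput, parallel_throughput)  # 10.0 20.0
```

This is exactly why the two metrics need separate tests: a throughput benchmark on the two-thread system would report a 2x improvement while a latency benchmark would, correctly, report none.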



Quote for the day:


"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis


Daily Tech Digest - November 24, 2019

Could Process Mining Be Bigger Than RPA (Robotic Process Automation)?

“Process mining is an easy idea,” said Rinke. “But it is hard to make it work right for organizations. You need to collect large amounts of data from all sorts of IT systems. You also need to go beyond integrations and must understand the databases that are underneath. And all are customized.” No doubt, a key driver for Celonis has been the rapid growth of RPA (Robotic Process Automation). “In RPA, you'll often get to the first low-hanging opportunities by asking people what routines take up most of their time,” said Antti Karjalainen, who is the CEO of Robocorp. “As companies progress in their automation journey, data-driven technologies become an important part of identifying opportunities. People might not even realize how their own work is related to work done in other areas of the company and process discovery technologies can uncover these hidden workflows.” But the Celonis software is not just for upfront analytics. It is something that is useful for ongoing monitoring to make sure that an RPA implementation is on track.



Designing for Flexibility

You should build the systems around the Processes, not the Organizations. That way, you could change the systems all you want and it wouldn’t affect the Organizations ... or, you could change the Organizations all you want and it wouldn’t affect the systems. That is, there is a “many-to-many” relationship between Process and Organization. (Any one Process may be performed by many Organizations and any one Organization may perform many Processes. Organization and Process are independent variables. Orthogonal.) Apparently this Process-to-Organization independence is still not very well understood. Within the last two or three years, I heard Steve Towers, a notable figure in the Process Management community, speaking at a conference in Bangalore, India, emphasizing a strong point: “The Process TRANSCENDS the organization!” That is, a Process may have many Organizations involved and conversely, an Organization may be involved in many Processes. That is, once again, there is a many-to-many relationship between Processes and Organizations ... or, they are “independent variables.” Dewey had figured that out sometime before I found him in 1970.
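The many-to-many relationship described above can be sketched as an independent set of (Process, Organization) pairs, so that reorganizing changes only the pairs, never the definitions on either side. The process and organization names below are hypothetical:

```python
# Model Process and Organization as independent variables linked only by a
# set of (process, organization) pairs -- the many-to-many relationship.
performs = {
    ("Claims Handling", "Regional Office"),
    ("Claims Handling", "Head Office"),     # one Process, many Organizations
    ("Billing",         "Regional Office"), # one Organization, many Processes
}

def orgs_for(process):
    return {org for proc, org in performs if proc == process}

def processes_for(org):
    return {proc for proc, o in performs if o == org}

# Reorganize all you like: only the pairs change; neither side's definition does.
print(orgs_for("Claims Handling"))
print(processes_for("Regional Office"))
```

The same structure is what a relational schema would express with a junction table between a Process table and an Organization table.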


Robotic Process Automation Analytics: KPIs for Your RPA Deployment

This means that scaling is on an ascending trend, probably because CEOs have started to realise the benefits of enterprise-wide deployment. There is no denying that the KPIs make a significant contribution to this trend. If so, it is legitimate to wonder how best to set the KPIs for your RPA deployment. The question is related to the subject of choosing appropriate metrics for a comprehensive assessment of ROI, beyond the financial impact of leveraging RPA in your company. In fact, the need to track various kinds of benefits, some of which are plain to observe and easy to calculate (the reduced costs of implementation, etc.), is a precondition for obtaining accurate estimates of the ROI made possible by automation. Setting your robotic process automation KPIs can thus be seen as a road opener for measuring ROI, which is itself a business metric.



A Leading EA Tool Comes with Extras

All these other competencies come into play if your business change initiative is to be successful. This second aspect triggered an interesting thought in our minds: an enterprise architecture initiative’s success depends on more than how capable the EA management suite is. Sure, that’s the most visible variable, but it too exists within a landscape of factors, each of which contributes significantly towards the desired outcome. Just think of your vehicle. It may seem like all it needs to run is gasoline, but try driving your family’s car around without changing the coolant, engine oil, or the windshield washer fluid and you’re probably not going to get very far. Therefore, should you find yourself in a position to procure an EA tool for your organization in the future, remember that while having a mature and competent EA platform is vital, there are other aspects you should not ignore lest you place a very low ceiling on your transformation initiative. These are the extras that a great EA tool comes with.


Automating our future: an inside look at robotic process automation


Businesses are deploying RPA to efficiently manage large-scale processing in ways that are customizable throughout each individual business. RPA is currently being used across almost all industries and functions, including IT, finance and accounting, human resources and customer service. RPA can be leveraged for an array of tasks – whether it is auto-populating forms or spreadsheets, organizing incoming information or processing transactions. What’s “new” about RPA is that benefits are generated for employees and businesses alike. For example, State Auto, a super-regional insurance holding company headquartered in Columbus, Ohio, uses RPA for back-office tasks. Auditors at State Auto go through thousands of policies to determine recommendations for changing rates. Policies that don’t need to be audited still have to be documented, which requires performing routine data-entry with two separate and unconnected systems, selecting codes and making drop-down selections. With robots in place, this activity happens rapidly and error-free, releasing individuals from time-consuming, and mind-numbing, tasks.


New bypass disclosed in Microsoft PatchGuard (KPP)

After Windows 10's release in 2015, the most notable of all PatchGuard bypasses was GhostHook, discovered by CyberArk researchers in 2017. GhostHook abused the Intel Processor Trace (PT) feature to bypass PatchGuard and patch the kernel. A second bypass was discovered and disclosed over the summer, in July. Found by Nick Peterson, anti-cheat expert at Riot Games, this bypass was named InfinityHook, and abused the NtTraceEvent API to patch the kernel. Describing the bypass at the time, Peterson said "InfinityHook stands to be one of the best tools in the rootkit arsenal over the last decade." Last month, a third PatchGuard bypass was disclosed; this time by Turkish software developer Can Bölük. Named ByePg, this exploit hijacks the HalPrivateDispatchTable to allow a rogue app to patch the kernel. Just like Peterson, when describing ByePg, Bölük said that the "weaponization potential of [ByePg] is only limited by your creativity."


Human Face to Enterprise Architecture


When performing market analysis and defining the journeys, an important step is to prioritize things accordingly. The first step of prioritization is to define the customer personas or segments that are of value or interest to the business. These may be the key target audience for the product or service, or they may be the most challenging segments (e.g. people likely to churn), so a business initiative can be focused on smoothing the experience for these people and reducing the likelihood of unwanted events. When the key personas or customer segments are defined, the journeys can be prioritized next. The same user can have multiple potential journeys that interact with the business. They can come from a targeted acquisition campaign or discover the service organically. They may have a positive or negative previous experience with this type of service. Just like it is important to prioritize the customer types, it is important to prioritize the journeys for those customers. Once those decisions are made, it is clear which customer journey the business is working with. This is when an enterprise architecture bit can be added into the picture.


Eliot Bendinelli, a technologist with UK non-profit Privacy International, says the organization wanted data protection agencies to take action because it believed there was a fundamental problem with the tracking industry. Its project began with an investigation into data sales by ad tech companies, credit rating agencies, ad blockers, and related organizations, he says. "We were building a case, and basically we think what they're doing is unlawful," Bendinelli continues. While waiting for agencies to act, the research team wanted to find an example of how tracking takes place on Web pages where people go to read and share sensitive data. "We wanted a concrete example of how tracking is happening on websites where you think you are safe, and where you are looking up or exchanging data that is sensitive and personal," he adds. They chose sites related to mental health because, as Bendinelli puts it, people may research mental health conditions online because they aren't yet ready to discuss them in person.


Why you should care about robotic process automation

RPA expects program, system, and even network heterogeneity. RPA evolved to eliminate gaps in workflows or processes that span disparate GUI-based systems. A history lesson might be helpful here. In the 1990s, packaged software suites emerged to displace fit-for-purpose GUI and text-based applications. An insurance company might once have depended on a mix of custom-built and commercial systems to support key processes such as enrollment, billing, claim filing, and claim adjustment; by the late 1990s, however, packaged applications were able to replicate many (if not most) of the features and functions these systems provided. But not all of them. More important, some subset of function-specific systems just couldn’t be replaced. The upshot was that even as enterprises restructured their business processes to accommodate packaged suites, they kept some of these processes (and their supporting IT systems) intact, too. The neat thing about RPA is that its software bots run alongside the GUI-based program(s) on the existing system.
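To make the "gap-filling" role concrete, here is a minimal sketch of the swivel-chair work an RPA bot automates: reading records out of one system and keying them into another that was never integrated with it. The system names, field layouts, and data are all hypothetical; a real bot would drive the actual GUIs rather than call Python functions.

```python
# Hypothetical: a legacy claim-filing system and a packaged suite that
# were never integrated, bridged by a bot instead of a human clerk.

def read_legacy_claims():
    """Stand-in for scraping the legacy GUI's claim list."""
    return [
        {"claim_no": "C-1001", "insured": "A. Rivera", "amount": "250.00"},
        {"claim_no": "C-1002", "insured": "B. Chen", "amount": "1200.50"},
    ]

def enter_into_suite(record, suite_db):
    """Stand-in for keying one record into the packaged suite's entry form."""
    suite_db[record["claim_no"]] = {
        "policyholder": record["insured"],        # field names differ per system
        "claim_amount": float(record["amount"]),  # bot normalizes types en route
    }

def run_bot():
    """The bot replays what a clerk does: read one screen, type into another."""
    suite_db = {}
    for claim in read_legacy_claims():
        enter_into_suite(claim, suite_db)
    return suite_db
```

The point is that neither system changes: the bot lives entirely in the seam between them, which is why RPA tolerates heterogeneity that integration projects cannot.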


Cloud Migration with the Help of EA

Creating a fully-functional, efficient target cloud architecture (complete with any necessary intermediary states) that accounts for the organization’s level of cloud maturity. Developing a coherent, model-based plan for transitioning from the existing systems to a cloud-focused future state is invaluable. Since cloud technology precipitates a decline in the number of both software and hardware components, it changes how the IT stack works and how it is serviced. It also triggers changes on the staff side: which roles are needed, how they now interact with each other, and whether any have become redundant. Ultimately, you need to develop a new picture of how people, processes, technology and capabilities function in the new cloud paradigm. What better way to get everyone informed, engaged, and feeling in control than by providing them with clear EA deliverables that explain how things are evolving? Enterprise architects have an opportunity here to deliver immense value to a wide array of stakeholders.



Quote for the day:


"Being honest and open is the only way to convince cynical employees that you truly want to establish a partnership with them." -- Florence M. Stone


Daily Tech Digest - November 23, 2019

Cheap 5G phones won't come to the masses until these things happen first


One reason why 5G phones cost so much is that the chips cost more, too. Without a 5G-ready chip that can talk to the carrier network, your phone can never reach those lightning speeds. Right now, these 5G chips are tailor-made to each carrier's particular wireless spectrum. So even if you buy the Galaxy S10 5G for AT&T, 5G data won't necessarily work on T-Mobile, Verizon or Sprint. Making 5G phones more or less bespoke to each carrier requires extra time and expense to develop, test and deploy. ... Separate 5G chipsets and modems may not be the norm for long. Qualcomm is working on a way to integrate the two into a single unit. The world's largest mobile chipmaker also plans to eventually make 5G available on multiple carrier bands. Both of these changes will simplify what it takes to build a 5G phone, which in turn should make handsets cheaper to build and maintain. Competition will also help lower the price, especially if players like MediaTek, known for undercutting Qualcomm on processors and modems, can target the 5G midrange chipset market abroad. Qualcomm itself is also committed to making a midrange 5G processor for cheaper phones.


5G: A transformation in progress

The road to 5G began back in 2015, with the ITU's IMT-2020 framework, which set out the general requirements and future development of the next-generation mobile technology (IMT stands for International Mobile Telecommunications) ... The ITU's broad goal for IMT-2020/5G was to accommodate "new demands, such as more traffic volume, many more devices with diverse service requirements, better quality of user experience (QoE) and better affordability by further reducing costs". The key driver for this effort was the need to "support emerging new use cases, including applications requiring very high data rate communications, a large number of connected devices, and ultra-low latency and high reliability applications" ... According to the GSA's latest (January 2019) figures, eleven operators claim to have launched 5G services (either mobile or FWA): AT&T (USA), Elisa (Finland and Estonia), Etisalat (UAE), Fastweb (Italy), LG Uplus (South Korea), KT (South Korea), Ooredoo (Qatar), SK Telecom (South Korea), TIM (Italy), Verizon (USA), and Vodacom (Lesotho). 


Target Sues Insurer Over 2013 Data Breach Costs
In its lawsuit, Target argues that its general liability policy with ACE covers property damage that includes "loss of tangible property that is not physically injured." This, according to Target's lawsuit, includes the replacement of those payment cards because they were "damaged" by the 2013 breach and could no longer be used. "ACE has refused to acknowledge coverage for the payment card claims and has further disregarded its contractual obligation to indemnify Target for the settlement payments relating to the payment card claims," according to the lawsuit. "ACE has improperly refused to indemnify Target for settlement payments falling within its aggregate coverage layer." ... A Target spokesperson told Information Security Media Group that the company had been negotiating with ACE for a year over this issue before deciding to file the lawsuit in federal court earlier this month. "We believe the costs are covered within the scope of the insurance policy Target has with ACE and are focused on resolving the outstanding claim," the Target spokesperson says.


Extreme targets data center automation with software, switches

Extreme Fabric Automation is hosted as an application on a guest virtual machine of the two new switches, providing on-premises and private-cloud deployment options, said Dan DeBacker, director of product management at Extreme. “The idea is to remove the need for IT to have to do manual switch-by-switch configurations,” he said. In addition, the software gives IT teams the ability to scale the network up and down to meet changes in demand, and it reduces the cost of operating the network. For those using the guest VM, it eliminates the need for an external server, DeBacker said. The Extreme Fabric Automation package now integrates with orchestration software including OpenStack, VMware vCenter, and Microsoft System Center Virtual Machine Manager (SCVMM). Each integration is a separate microservice and additional integrations will be available in future releases of the software, Extreme said. The orchestration software further automates network configuration, coordination, and management of resources, DeBacker said.


Why SaaS-based AI training will be a game changer

What strikes me about this approach to AI training is that you need a sound training data set. In some cases, it can be obtained from open or proprietary training data brokers. In most instances, you format your own data to train the machine learning model. However, what if other trained machine learning models could train your models, anywhere and at any time? The idea is not new. Since the advent of AI we’ve toyed with the idea of having one AI engine teach another, either by sharing training data or, better yet, sharing knowledge and experience through direct, automatic interaction. Having one AI engine mentor yours provides outside experience and thus makes the AI model more valuable and effective. This is easier said than done. Machine learning engines typically don’t talk to each other, even if they are the same software. They are designed from the ground up to be stand-alone learners that interact with non-AI systems or humans. However, inter-AI engine training is on most vendors’ radar screens.
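The "one engine mentoring another" idea is usually realized today as knowledge distillation: the student model never sees ground-truth labels, only the teacher's soft outputs. A toy sketch under invented assumptions (the teacher here is just a fixed sigmoid rule, and the student is a one-feature logistic model):

```python
# Toy knowledge distillation: a student model fits the soft probabilities
# emitted by a "pre-trained" teacher. Teacher rule and data are invented.
import math

def teacher(x):
    """Pretend pre-trained model: soft probability that x is 'positive'."""
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

def train_student(xs, lr=0.5, epochs=2000):
    """Fit a one-feature logistic student to the teacher's soft labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in xs:
            p = 1 / (1 + math.exp(-(w * x + b)))
            err = p - teacher(x)   # gradient of cross-entropy vs. soft target
            w -= lr * err * x
            b -= lr * err
    return w, b

xs = [-2, -1, 0, 1, 2]
w, b = train_student(xs)
# After training, the student closely mimics the teacher's outputs on xs.
```

Real inter-engine training adds the hard parts this sketch omits: agreeing on an interchange format for the soft outputs and doing it across vendor boundaries.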


BankThink Charter or not, fintechs are already ‘banking’


Despite these challenges, many fintechs (Varo Money, LendingClub, OnDeck, Robinhood, Square and Revolut, among others) are actively trying to become some type of bank. The reasons they want to be a “real” bank are obvious. Licensed banks in the U.S. get extremely valuable privileges, including direct access to the payments system, low-cost deposits, stable funding and a national platform to preempt conflicting state laws. This would be especially valuable for fintech lenders and payments innovators. But no one has made it to the goal line yet. What about the contradictory proposition that, today, anyone can be a fintech bank? Just look around. So many fintech and big tech companies have created so-called synthetic banks. These are companies that provide insured checking and savings accounts, payment cards and most of the capabilities of a traditional consumer bank without actually being a licensed bank.


Cybersecurity: Are your payments systems fortified against a growing threat?


Lack of adequate defenses against cyberattacks can render all other efforts to maximize working capital moot. For many companies, the loss of working capital, which essentially is a measure of a company’s liquidity and short-term financial picture, could be crippling, or even force a sale. Therefore, it’s vitally important that business owners understand the nature of the threat companies face in general and work with their bank to implement financial solutions to safeguard their working capital. While breaches of large companies are regularly in the news, those of smaller enterprises don’t typically receive media attention. However, sophisticated criminals are actively infiltrating and stealing large sums of money from companies of all sizes. These attacks are an expensive problem for victimized companies. The average reported cost for a compromise at small and midsized companies was $1.24 million for the fiscal year ended Sept. 30, up 24% from the same period two years ago, according to research firm Ponemon Institute. The average cost for business disruption rose to $1.9 million, up 57%, during the same period.


Sacha Baron Cohen gave the greatest speech on why social networks need to be kept in check


Facebook, YouTube and Google, Twitter and others—they reach billions of people. The algorithms these platforms depend on deliberately amplify the type of content that keeps users engaged—stories that appeal to our baser instincts and that trigger outrage and fear. It's why YouTube recommended videos by the conspiracist Alex Jones billions of times. It's why fake news outperforms real news, because studies show that lies spread faster than truth. And it's no surprise that the greatest propaganda machine in history has spread the oldest conspiracy theory in history—the lie that Jews are somehow dangerous. As one headline put it, "Just Think What Goebbels Could Have Done with Facebook." On the internet, everything can appear equally legitimate. Breitbart resembles the BBC. The fictitious Protocols of the Elders of Zion look as valid as an ADL report. And the rantings of a lunatic seem as credible as the findings of a Nobel Prize winner. We have lost, it seems, a shared sense of the basic facts upon which democracy depends.


Ghost ships, crop circles, and soft gold: A GPS mystery in Shanghai


Nobody knows who is behind this spoofing, or what its ultimate purpose might be. These ships could be unwilling test subjects for a sophisticated electronic warfare system, or collateral damage in a conflict between environmental criminals and the Chinese state that has already claimed dozens of ships and lives. But one thing is for certain: there is an invisible electronic war over the future of navigation in Shanghai, and GPS is losing. ... In fact, something far more dangerous was happening, and the Manukai’s captain was unaware of it. Although the American ship’s GPS signals initially seemed to have just been jammed, both it and its neighbor had also been spoofed—their true position and speed replaced by false coordinates broadcast from the ground. This is serious, as 50% of all casualties at sea are linked to navigational mistakes that cause collisions or groundings. When mariners simply lose a GPS signal, they can fall back on paper charts, radar, and visual navigation. But if a ship’s GPS signal is spoofed, its captain—and any nearby vessels tracking it via AIS— will be told that the ship is somewhere else entirely.


Federal Reserve Report Raises Concerns About 'Stablecoins'  

While the Federal Reserve report acknowledges that stablecoins offer innovation in the global financial payment systems, it notes that without proper regulation and controls, these virtual currencies can lead to financial instability as well as security issues. "The possibility for a stablecoin payment network to quickly achieve global scale introduces important challenges and risks related to financial stability, monetary policy, safeguards against money laundering and terrorist financing, and consumer and investor protection," the report states. And while the Federal Reserve report did not offer specific policy recommendations, James Wester, an analyst at IDC who studies cryptocurrency and blockchain, believes that the central bank decided to address this issue because of Facebook's Libra plans. "What this activity means is that the idea of stablecoins and digital currencies is being looked at seriously and thoughtfully," Wester tells Information Security Media Group.



Quote for the day:


"The leadership team is the most important asset of the company and can be its worst liability." -- Med Jones


Daily Tech Digest - November 22, 2019

What is Neuromorphic Computing? Let’s Dive Deep Into It

The concept of neuromorphic computing was spearheaded by Caltech professor Carver Mead during the 1980s. Neuromorphic computing (also referred to as neuromorphic engineering) is still in its nascent stages but constantly evolving, and only in the last couple of years has it become feasible for business use cases. To imitate the human brain and nervous system, researchers and scientists are building artificial neural systems that replace neurotransmitters with nodes. One of the hindrances to these systems is the binary nature of digital processing. CPUs send messages through circuits that are either on or off; there is no room for degrees of subtlety. Engineers tackled this issue by returning to simple analog circuits. Accordingly, they have built processors that can modulate the amount of current flowing between nodes, like the fluctuating electric impulses in the brain that shape and modify brain chemistry.
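The textbook starting point for this graded, spike-based signalling is the leaky integrate-and-fire (LIF) neuron: the membrane potential varies continuously between spikes rather than switching between on and off. A minimal simulation, with illustrative constants not taken from any particular neuromorphic chip:

```python
# Minimal leaky integrate-and-fire neuron. Unlike a binary logic gate,
# the membrane potential v takes continuous values between spikes.

def lif_run(input_current, threshold=1.0, leak=0.9):
    """Integrate input current with leak; emit a spike when the membrane
    potential crosses the threshold, then reset."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = v * leak + i        # leaky integration of incoming current
        if v >= threshold:      # threshold crossing -> spike
            spikes.append(t)
            v = 0.0             # reset after firing
    return spikes

# A constant weak input makes the neuron fire periodically as charge
# accumulates faster than it leaks away.
spike_times = lif_run([0.3] * 20)
```

Networks of such neurons, wired so each spike injects current into downstream neurons, are the software analogue of the analog circuits described above.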



75% of developers worry about app security, but half lack dedicated security experts on their team


This most recent study found that mobile app security is an ongoing problem. Some 43% of respondents said they still prioritize meeting their app release deadlines over security measures. With pressure to deliver functional apps by certain dates, coders either disregard security or take shortcuts to meet the deadline, the report found. Nearly 60% of developers said they are aware security should be a priority, but the pressure of meeting a deadline prevents them from treating it as such. Because of these pressures, more than half (52%) of respondents said they've experienced burnout. Burnout can be detrimental to an employee's physical and mental health, as well as have a negative impact on job performance, according to the report. "While developers' concerns about securing their code are on an upward trajectory, it's clear the industry has a long way to go. Developers are on the front lines when it comes to protecting their organizations from cyberattacks, and they need the right tools and training to handle this burden," Joseph Feiman, chief strategy officer at WhiteHat Security, said in the press release.


Slack: Microsoft Teams not only copies our product but our ads too


Microsoft this week announced that its answer to messaging platform Slack, Microsoft Teams, now has 20 million daily active users (DAU), making it almost twice the size of Slack. Teams has achieved that in just two years since its launch, thanks in part to being bundled with Office 365. Slack has argued that DAUs don't reflect user engagement and says users from its paid customers spend more than nine hours a day connected to Slack, and more than 90 minutes per day actively using it. Microsoft defines DAUs as "the maximum daily users performing an intentional action in the last 28-day period across the desktop client, mobile client and web client". Needless to say, competition is fierce in the enterprise chat space but Teams is an existential threat for Slack, whereas for Microsoft it is just a part of a much bigger pie in Office 365 and Microsoft 365. In a Twitter post titled 'ok boomer' – a reference to younger generations who feel ripped off by the Baby Boomer generation – Slack draws attention to the similarities between its own ad for the Slack Frontiers conference in April and later promotional videos, and Microsoft's 'The Art of Teamwork' ad, which was published in November.


5G security and privacy for smart cities


Connected services and infrastructure are a double-edged sword that helps provide better visibility, efficiency and performance, but is making non-critical infrastructure critical and therefore exposing more of the population to unaffordable risks. The general public is being ‘lulled’ into welcoming the convenience and continuous visibility provided by 5G, though in the event of a disruption, public order could be at stake. The conventional boundaries of critical infrastructure such as water supply, the energy grid, military facilities, and financial institutions will expand much further into unprecedented areas in a 5G-connected world. All of these will require new standards of safety. On the privacy side, matters become more complex. The advent of 5G with its short range will definitely mean more cell communication towers and building antennas being deployed in dense urban centers. With the right toolset, someone could collect and track the precise location of users. Another issue is that 5G service providers will have extensive access to large amounts of data being sent by user devices that could show exactly what is happening inside a user’s home and, at the very least, describe via metadata their living environment, in-house sensors and parameters.


Does your legal department spark joy?


Historically, when companies wanted to identify deals trends to present to their clients, they deployed teams of junior lawyers to analyze contract databases, a project that could take months. Today, forward-thinking companies with a digital contract repository and basic analytics technology can do a similar exercise in just a few clicks. Digitization and the active life-cycle management of contracts should now be relatively easy tasks to accomplish, not a leap into technology so sophisticated or cutting-edge that expert operators are required. Although many organizations have taken steps to move contracts from filing cabinets into cloud repositories, these actions have often been inconsistently implemented across the company, siloed within individual business units. The legal department, however, intersects with all parts of the business, and thus is in a unique position to oversee contract management. In the life cycle of a typical contract under the old system, the legal team is involved only twice: at the start, in drafting, negotiating, and executing the document; and at the end, in renewal, termination, or management of a dispute.


Should cybersecurity be taught in schools?


It is safe to say that young people are too often unaware of the risks involved in excessive sharing of photos and posting sensitive information on social media, nor do they associate such habits with problems that may ensue, such as grooming, sexting, cyberbullying, and phishing. After all, this is confirmed by findings gathered in a project called “Promoting information security in the school environment” (only available in Spanish) and prepared by the National University of Córdoba, Argentina. As the project’s creators explain, the proliferation of such poor cyber-habits has created the need for parents and educational institutions to actively seek information about privacy and security, notably about various aspects of data protection, cryptography, and prevention of identity and information theft and web-based cyberattacks. Meanwhile, the Computer Emergency Response Team of the National Autonomous University of Mexico (UNAM-CERT) echoes this view, noting that children and teens don’t have sufficient cybersecurity skills when they complete primary and secondary education. While computing classes do sometimes include aspects of good cyber-hygiene practices, online behavior isn’t thoroughly addressed.


Pegasus like spyware could be snooping on you right now!!


Until the last incident, Pegasus was gaining entry into a user’s mobile by tricking the user into clicking a link. The user still had control over whether or not to click the link and prevent Pegasus spyware from getting installed. However, in a bold and game-changing move, Pegasus spyware has now been found to exploit a vulnerability in WhatsApp that doesn’t even require any action from the victim. All it takes to take over the victim’s phone is a missed call on WhatsApp, and there’s absolutely nothing the mobile user can do to control this. Sounds scary, right? It is. Typically, in this case, users realized that they had been compromised by Pegasus only when WhatsApp sent them a message on its platform notifying them about the same. There are paid and free applications available on app stores (of respective operating system providers) that claim stellar detection capabilities for this insidious spyware. However, there is no clear indication of the success of their functionality.


New Database For Data Scientists

TileDB consists of a new multi-dimensional array data format, a fast, embeddable, open-source C++ storage engine with data science tooling integrations, and a cloud service for easy data management and serverless computations. The developers say traditional databases aren't ideal for data science use as they're not cloud-optimized, while cloud object stores suffer from object immutability, eventual consistency, and IO request limiting. A second problem is that some formats lack sufficient support for efficient data updates. They give the example of updating a Parquet file requiring the creation of a new file, pushing the entire update logic to the user’s higher-level application, and say similar problems arise whenever the update logic is not built into the format and storage engine, but it is rather delegated to higher-level applications. Finally, the developers cite limited scope as a problem, on the basis that most data science applications require at least two separate file formats to handle both array data and dataframes; multi-dimensional arrays for uses such as linear algebra; and dataframes for OLAP operations.
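The update problem the developers describe is commonly solved with immutable write fragments that are merged at read time: an "update" is just a new fragment, so nothing already on disk is rewritten. A toy sketch of that idea in plain Python; this illustrates the concept only, not TileDB's actual on-disk format or API:

```python
# Fragment-style updates: each write is an immutable timestamped fragment;
# reads merge all fragments with the newest value winning per cell.

class FragmentedArray:
    def __init__(self):
        self.fragments = []            # list of (timestamp, {cell: value})

    def write(self, timestamp, cells):
        """Append an immutable fragment; existing fragments never change."""
        self.fragments.append((timestamp, dict(cells)))

    def read(self):
        """Merge fragments in timestamp order, later writes overriding."""
        merged = {}
        for _, cells in sorted(self.fragments, key=lambda f: f[0]):
            merged.update(cells)
        return merged

arr = FragmentedArray()
arr.write(1, {(0, 0): 10, (0, 1): 20})
arr.write(2, {(0, 1): 99})             # an "update" is just a new fragment
```

Because fragments carry timestamps, this layout also gives time-travel reads for free: merging only fragments up to a cutoff reconstructs the array as of that moment.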


Edge vs. Chrome: Microsoft's Tracking Prevention hits Google the hardest

Microsoft has yet to publish formal documentation for this feature. As a result, the implementation has a "black box" feel to it. There's also no obvious way to customize its actions or to replace the built-in lists with third-party alternatives. If you're running the new Edge, you'll find Tracking Prevention on the Edge Settings page, under the Privacy And Services heading. The simple user interface includes an on-off switch for the feature (1), three boxes that define the extent of tracker blocking (2), and a place to manage exceptions (3). By default, Tracking Prevention is turned on, with the Balanced setting selected. According to Microsoft, that setting "blocks potentially harmful trackers and trackers from sites you haven't visited," without breaking functionality in the websites you visit. Bumping that setting up to Strict blocks "the majority of trackers across all sites ... but could cause some websites to not behave as expected." On my Windows 10 test PC, the Trust Protection Lists are located in the current user's profile, at %LocalAppData%\Microsoft\Edge Beta\User Data\Trust Protection Lists\, in a subfolder that identifies the version number of the current lists.


Balancing control and speed when integrating AI

Within the cloud space, AI is being considered for collaboration more and more as the likes of IBM, Amazon and Microsoft delve into this kind of technology. Automated management of hard drive-free data is bound to speed up the process of storage management. Also, AI, with its need for a large amount of processing power, can thrive within the cloud, which is known for its ability to manage large projects with ease. But according to Domo‘s VP of Data and Curiosity, Ben Schein, it’s vital that the agility and speed that AI can provide is balanced with integration and control. To achieve this, Schein said it “comes down to a sense of empathy for the people that have to use intelligence”. He went on to suggest addressing the fear that some employees feel about AI by making it easily accessible and encouraging feedback. “If I’ve been running a store for 20 years, for a retailer, I have a lot of knowledge that’s actually valuable within that setting, and if you’re not setting it up to give that feedback into it, then you’re in trouble,” said Schein.



Quote for the day:


"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward


Daily Tech Digest - November 21, 2019

California's IoT Security Law: Why It Matters And The Meaning Of 'Reasonable Cybersecurity'

According to the law, a reasonable security feature must be “appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure, as specified.” The law is specific about security as it relates to authentication for devices outside a local area network, stating that “the preprogrammed password is unique to each device manufactured” and “the device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.” As you can see, guidance included as part of the law is specific to authentication, and it remains vague regarding other reasonable cybersecurity measures that are necessary beyond password management. However, companies can look to prior guidance for clarity, which defines compliance with the 20 security controls in the CIS Critical Security Controls for Effective Cyber Defense as the "floor" for reasonable cybersecurity and data protection.
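One way a manufacturer might satisfy the "preprogrammed password is unique to each device" clause is to derive each default credential from the device's serial number and a factory-held secret, so no two units ship with the same password. The secret and scheme below are hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical per-device default credential derivation. In production
# the factory secret would live in an HSM, never in source code.
import hashlib
import hmac

FACTORY_SECRET = b"example-secret-not-a-real-key"  # placeholder value

def default_password(serial: str) -> str:
    """Derive a unique, reproducible default password for one device."""
    digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]   # e.g. printed on the device label
```

The derivation is deterministic, so support staff holding the secret can recompute a device's default credential from its serial, yet every serial yields a distinct password.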



Serverless HTTP With Durable Functions

Durable functions rely on a main orchestrator function that coordinates the overall workflow. Orchestrator functions must be deterministic and execute code with no side effects so that the orchestration can be replayed to “fast forward” to its current state. Actions with side effects are wrapped in special activity tasks that act as functions with inputs and outputs and manage things like I/O operations. The first time the workflow executes, the activity is called, and the result evaluated. Subsequent replays use the returned value to ensure the deterministic code path. Until the release of version 2.0, this meant interacting with HTTP endpoints required creating special activity tasks. As of 2.0, this is no longer the case! Now, with the introduction of the HTTP Task, it is possible to interact with HTTP endpoints directly from the main orchestration function! The HTTP Task handles most of the interaction for you and returns a simple result. There are some trade-offs.
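The replay mechanic is easiest to see in miniature. The sketch below imitates the concept only (a generator as the deterministic orchestrator, recorded activity results used to "fast forward" it); the real Durable Functions runtime persists history durably and schedules activities asynchronously, and all names here are invented:

```python
# Mini replay model: deterministic orchestrator, side effects in
# activities, recorded results replayed to fast-forward the generator.

def orchestrator():
    a = yield ("activity", "fetch_quote")     # side-effecting work is yielded
    b = yield ("activity", "fetch_tax_rate")
    return a * (1 + b)                        # pure, replay-safe logic

def run_with_replay(history, activities):
    """Replay recorded results first; run an activity only at the frontier."""
    gen = orchestrator()
    request = next(gen)
    try:
        for recorded in history:              # fast-forward through history
            request = gen.send(recorded)
        while True:                           # new activities: run and record
            _, name = request
            result = activities[name]()
            history.append(result)
            request = gen.send(result)
    except StopIteration as done:
        return done.value

activities = {"fetch_quote": lambda: 100.0, "fetch_tax_rate": lambda: 0.2}
total = run_with_replay([], activities)       # runs both activities -> 120.0
```

Replaying with a complete history (`run_with_replay([100.0, 0.2], {})`) reaches the same result without touching the activities at all, which is exactly why orchestrator code must stay deterministic and side-effect free.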


Google's new AI tool could help decode the mysterious algorithms that decide everything


Users can pull out that score to understand why a given algorithm reached a particular decision. For example, in the case of a model that decides whether or not to approve someone for a loan, Explainable AI will show account balance and credit score as the most decisive data. Introducing the new feature at Google's Next event in London, the CEO of Google Cloud, Thomas Kurian, said: "If you're using AI for credit scoring, you want to be able to understand why the model rejected a particular model and accepted another one." "Explainable AI allows you, as a customer, who is using AI in an enterprise business process, to understand why the AI infrastructure generated a particular outcome," he said. The explaining tool can now be used for machine-learning models hosted on Google's AutoML Tables and Cloud AI Platform Prediction. Google had previously taken steps to make algorithms more transparent. Last year, it launched the What-If Tool for developers to visualize and probe datasets when working on the company's AI platform.
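The core idea behind such per-feature scores can be sketched with a crude baseline-substitution attribution (a simplified cousin of the integrated-gradients and SHAP methods such tools use). The loan model, weights, and feature names below are made up for illustration, not Google's:

```python
# Baseline-substitution attribution: a feature's contribution is how far
# the model's output moves when that feature is swapped for its baseline.

def loan_model(features):
    """Toy credit scorer: weighted sum of normalized features."""
    weights = {"account_balance": 0.6, "credit_score": 0.3, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def attributions(model, features, baseline):
    """Score each feature by the output shift its removal causes."""
    full = model(features)
    contrib = {}
    for name in features:
        perturbed = dict(features, **{name: baseline[name]})
        contrib[name] = full - model(perturbed)
    return contrib

applicant = {"account_balance": 0.9, "credit_score": 0.8, "age": 0.5}
baseline  = {"account_balance": 0.0, "credit_score": 0.0, "age": 0.0}
scores = attributions(loan_model, applicant, baseline)
# For this applicant, account_balance dominates the explanation.
```

A user reading the scores sees, as in Kurian's loan example, which inputs actually drove the decision rather than just the decision itself.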


The cybercrime ecosystem: attacking blogs

Thirty-seven percent of the top 40 blogs in Sweden were running an outdated version of WordPress, with the oldest version being from 2012, vulnerable to a lot of exploits, up to full remote code execution allowing the attacker to compromise not just the WordPress installation, but the server it is running on, too. When checking the server hosting this extremely old WordPress installation, I found that 13 other websites were running on the same server. Most of the outdated WordPress installations were from 2018. As mentioned before, this is a very common way for cybercriminals to spread malware, but how does it work in real life? After the WordPress site is compromised, the most common technique is to redirect the user to a so-called exploit kit. This is a system which will enumerate the browser, and if a list of requirements is met, deliver the malicious payload to the victim. For example, some of the requirements may be to exploit a certain browser only, if the exploit kit only has exploits for Firefox. In that case, nothing will happen if you visit the website in Chrome or Internet Explorer.
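Surveys like this typically work by reading the version WordPress advertises in its `<meta name="generator">` tag and comparing it against a cutoff. A minimal sketch; the HTML sample and version floor are illustrative, and a real scan would fetch live pages over HTTP first:

```python
# Spotting outdated WordPress installs from the generator meta tag.
import re

def wp_version(html):
    """Extract the advertised WordPress version, or None if absent."""
    m = re.search(r'<meta name="generator" content="WordPress ([\d.]+)"', html)
    return tuple(int(p) for p in m.group(1).split(".")) if m else None

def is_outdated(html, floor=(5, 3)):
    """True when a version is advertised and it is below the floor."""
    v = wp_version(html)
    return v is not None and v < floor

page = '<head><meta name="generator" content="WordPress 3.4.1" /></head>'
```

Note the tag is trivial for site owners to suppress, which is why serious scanners also fingerprint version-specific files and paths, but the generator tag alone catches a surprising share of neglected installs.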


cloud network blockchain bitcoin storage
"These services may be half the price of Amazon S3, but they’re 100 times greater risk given the decentralized nature of the storage and the nascent companies behind them," Bala said via email. "Comparatively, AWS is a trusted provider with 10s of exabytes under management. I am also very skeptical of the performance claims being made relative to S3, particularly when objects need to be rebuilt in case a peer in the storage network disappears." Cloud storage provider Backblaze offers capacity through its B2 service at a quarter the price of Amazon AWS, but without the risk a P2P architecture poses, Bala said. "B2 is built and operated by sophisticated people from a technical perspective with a successful track record. So one need not use a P2P storage service just to save money," Bala said. Bala also criticized P2P-based storage services for claiming to use blockchain's innate cryptography and resilliency when, in fact, the distributed ledger technology is only used for the purposes of payment.


How to Build a Regex Engine in C#

This is an ambitious article. The goal is to walk you through the building of a fully featured regular expression engine and code generator. The code contains a complete and ready to use regular expression engine, with plenty of comments and factoring to help you through the source code. First of all, you might be wondering why we would develop one in the first place. Aside from the joy of learning how regular expressions work under the hood, there's also a gap in the .NET framework's regular expression classes which this project fills nicely. This will be explained in the next section. I've previously written a regular expression engine for C# which was published here, but I did not explain the mechanics of the code. I just went over a few of the basic principles. Here, I aim to drill down into a newer, heavily partitioned library that should demystify the beast enough that you can develop your own or extend it. I didn't skimp on optimizations, despite the added complication in the source. I wanted you to have something you could potentially use "out of the box."
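Before diving into a full engine, it helps to see how small a working matcher can be. The classic minimal matcher popularized by Kernighan and Pike supports only `.` and `*`, but it demonstrates the same backtracking core that a full NFA/DFA engine like the article's generalizes (shown here in Python rather than the article's C#):

```python
# Minimal regex matcher supporting '.' (any char) and '*' (zero or more).

def match(pattern, text):
    """Anchored match: does pattern match a prefix of text?"""
    if not pattern:
        return True
    if len(pattern) > 1 and pattern[1] == "*":
        return match_star(pattern[0], pattern[2:], text)
    if text and pattern[0] in (".", text[0]):
        return match(pattern[1:], text[1:])
    return False

def match_star(c, rest, text):
    """Try zero or more occurrences of c, backtracking as needed."""
    i = 0
    while True:
        if match(rest, text[i:]):
            return True
        if i < len(text) and c in (".", text[i]):
            i += 1
        else:
            return False

def search(pattern, text):
    """Unanchored: match the pattern anywhere in text."""
    return any(match(pattern, text[i:]) for i in range(len(text) + 1))
```

Everything a production engine adds on top of this skeleton, including character classes, alternation, capture groups, and DFA compilation for speed, is elaboration of these few recursive cases.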


Under the microscope: inbound versus outbound email protection

Times change, technologies continue to evolve, and yet email remains the easiest avenue of attack for cybercriminals looking to hack into your business. Need convincing? Well, in 2018, 94% of malware attacks were delivered by email, 78% of cyber espionage incidents used phishing, and 32% of all reported breaches involved phishing (let’s not dwell too much on the possible scale of unreported breaches). The truth is that email has been the easiest avenue of attack for at least two decades and, unless there are some fundamental changes in how the problem is addressed at a global level, it will probably remain so for another decade. In the meantime, businesses continue to look for ways of increasing their level of inbound protection – deploying security products that attempt to block access to infected sites or identify unsavoury email content before it reaches the recipient. These products come in many different shapes and sizes and are then augmented by a ‘human shield’, i.e. the vigilance of employees in spotting phishing scams and fraudulent messages that have outwitted the technology.


Q&A on the Book Rebooting AI

There are many legitimate concerns about AI. People with bad intentions - criminals, terrorists, militaries carrying out war, authoritarian governments carrying out surveillance - will undoubtedly misuse it, as they do every powerful technology. People, both in the general public and in positions of authority, are apt to trust it too much. Unless it is audited very carefully, AI can perpetuate existing social biases, as we've seen in many scandals over the last decade, such as the Amazon job recruitment program that was unshakably biased against women applicants. But our largest concern is that the great potential of AI to benefit mankind will end up unrealized: first, because people will be frightened by the dangers and, after a certain point, discouraged by the limitations and failures of existing AI; and, second, because AI research, fixated on the short-term successes of machine learning, will fail to explore other approaches whose payoffs come more slowly but could prove far greater in the long term.


IoT sensors must have two radios for efficiency

For the Internet of Things to become ubiquitous, many believe that inefficiencies in the powering of sensors and radios have to be eliminated. Battery chemistry just isn’t good enough, and it’s simply too expensive to continually perform truck rolls, for example, whenever batteries need changing out. In many cases, solar battery top-ups aren’t the solution because that usually fixed-position technology isn’t particularly suited to mobile or impromptu ad hoc networks. Consequently, there’s a race on to find either better chemistries that allow longer battery life or more efficient chips and electronics that just sip electricity. One approach being explored is to wake network radios only when they need to transmit a burst of data. Universities say they are making significant progress in this area. “The problem now is that these [existing] devices do not know exactly when to synchronize with the network, so they periodically wake up to do this even when there’s nothing to communicate,” explains Patrick Mercier, a professor of electrical and computer engineering at the University of California, San Diego, in a media release.
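A rough back-of-envelope calculation makes the case for the second radio concrete. All figures below are illustrative assumptions, not numbers from the UC San Diego work: a node that wakes its main radio on a fixed schedule burns most of its budget on empty sync checks, while a node with a low-power wake-up receiver pays only a tiny standing cost plus the cost of real transmissions.

```python
# Hypothetical power figures, purely illustrative.
MAIN_RADIO_MW = 50.0     # main radio active power, milliwatts (assumed)
WAKEUP_RX_UW = 0.03      # always-on wake-up receiver, microwatts (assumed)
LISTEN_S = 0.01          # seconds the main radio is on per wake-up
WAKEUPS_PER_HOUR = 60    # scheduled sync checks per hour (duty cycling)
EVENTS_PER_HOUR = 1      # actual transmissions needed per hour

def duty_cycle_energy_mj(hours):
    """Energy (mJ) spent waking the main radio on a fixed schedule,
    regardless of whether there is anything to communicate."""
    return MAIN_RADIO_MW * LISTEN_S * WAKEUPS_PER_HOUR * hours

def wakeup_radio_energy_mj(hours):
    """Energy (mJ) with an always-on wake-up receiver that powers the
    main radio only when traffic actually arrives."""
    always_on = (WAKEUP_RX_UW / 1000.0) * 3600 * hours  # uW->mW, h->s
    on_demand = MAIN_RADIO_MW * LISTEN_S * EVENTS_PER_HOUR * hours
    return always_on + on_demand

print(duty_cycle_energy_mj(24))    # ~720 mJ per day
print(wakeup_radio_energy_mj(24))  # ~15 mJ per day
```

Under these assumed numbers the wake-up architecture spends well under a tenth of the energy of naive duty cycling, which is the basic argument for putting two radios on a sensor.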


Facebook: Microsoft's Visual Studio Code is now our default development platform


While Facebook is making VS Code the default developer environment, Marcey notes that Facebook does not have a "mandated development environment" and that some developers use other editors such as Vim and Emacs. Nonetheless, the default status for VS Code means that Facebook is backing it for its development future. "Visual Studio Code is a very popular development tool, with great investment and support from Microsoft and the open-source community," said Marcey. "It runs on macOS, Windows, and Linux, and has a robust and well-defined extension API that enables us to continue building the important capabilities required for the large-scale development that is done at the company. Visual Studio Code is a platform on which we can safely bet our development platform future." Facebook is also teaming up with Microsoft to improve the remote-desktop experience with VS Code via remote development VS Code extensions. Microsoft in May announced previews of three extensions that enable development in containers, remotely on physical or virtual machines, and with the Windows Subsystem for Linux (WSL).



Quote for the day:


"Leadership cannot just go along to get along. Leadership must meet the moral challenge of the day." -- Jesse Jackson