Daily Tech Digest - May 23, 2018

12 Frequently Asked Questions on Deep Learning


Feature engineering is the process of using domain knowledge to build feature extractors that reduce the complexity of the data and make its patterns more visible to learning algorithms. The process is difficult and expensive in terms of time and expertise. In machine learning, most of the applied features need to be identified by an expert and then hand-coded according to the domain and data type. For example, features can be pixel values, shapes, textures, positions and orientations. The performance of most machine learning algorithms depends on how accurately the features are identified and extracted. Deep learning algorithms instead try to learn high-level features from the data itself. This is a very distinctive part of deep learning and a major step beyond traditional machine learning, because it reduces the work of developing a new feature extractor for every problem. A convolutional neural network, for example, will learn low-level features such as edges and lines in its early layers, then parts of faces, and then a high-level representation of a face.
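The last point can be made concrete with a tiny example. Below is a minimal sketch of a small convolutional network in PyTorch (the FAQ above does not reference any particular framework or architecture, so the layer sizes here are purely illustrative): the early convolutional layers pick up low-level patterns such as edges and lines, while deeper layers compose them into higher-level representations, with no hand-coded feature extractor involved.

```python
import torch
import torch.nn as nn

# A tiny CNN: each stage learns progressively higher-level features.
# Layer depths and sizes are arbitrary, chosen only for illustration.
class TinyFaceNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edges, lines
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # middle layer: textures, object parts
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # deep layer: object-level structure
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)  # learned feature extraction, no hand-engineered features
        return self.classifier(x.flatten(1))

# Example: a batch of four 64x64 RGB images
logits = TinyFaceNet()(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```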



No CS degree? For skilled developers, 75% of hiring managers don't care

Strong work experience is the most important qualification that recruiters and hiring managers look for when filling tech positions, the report found. However, resume-bolstering factors like degree, prestige, and skill keywords are not accurate predictors of future job success, according to the report. Instead, hiring managers and recruiters are looking at indicators that demonstrate ability, such as previous work experience, years of work, and personal projects, which come closer to measuring a candidate's skills. ... Hiring managers' top three measures of success in recruiting were quality of candidate, future performance success, and employee retention, the report found. Misalignment on skills and on expectations for candidates are two of the top hurdles hiring managers face when working with recruiters, the report found. To solve this problem, recruiters should regularly check in with hiring managers to understand the nuances of the technical skills hiring managers are looking for in each open role. For example, what are the crucial must-have skills for a full-stack developer versus a back-end developer? This can help narrow down the pool of qualified candidates.


Doctors are using AI to see how diseases change our cells


This model can predict where these organelles will be found in any new cell, so long as it’s provided with an image from a microscope. The researchers also used AI to create a probabilistic model that takes its best guess at where one might expect to find those same organelles if provided with a cell’s size and shape, along with the location of its nucleus. These models are useful for doctors and scientists because they provide a close-up look at the effects of cancer and other diseases on individual cells. By feeding the AI with data and images of cancer cells, they can get a more complete picture of how the cell, and its individual components, are affected. And that can indicate how doctors can help each patient with treatment tailored to their disease. The team from the Allen Institute hopes their tools can help democratize medical research, improving healthcare in underserved areas. So the researchers are working to improve them, creating more complete models, according to NPR. They hope to have a broader database, full of models of more cells, available over the next few months.


Everything you need to know about the new general data protection regulations

GDPR applies to any organisation operating within the EU, as well as any organisation outside the EU which offers goods or services to customers or businesses in the EU. That ultimately means that almost every major corporation in the world will need to be ready when GDPR comes into effect, and must start working on its GDPR compliance strategy. The legislation applies to two different types of data-handlers: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation. A controller is a "person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of processing of personal data", while a processor is a "person, public authority, agency or other body which processes personal data on behalf of the controller". If you are currently subject to the UK's Data Protection Act, for example, it's likely you will have to look at GDPR compliance too.


You’ve probably been hiring the wrong kind of data scientist

A lot of people like to call themselves data scientists because they’re using point-and-click tools, like Tableau and Excel, to perform data analysis and visualization in order to gain business insights. ... The real challenge comes from handling large datasets, including textual or other unstructured raw data, and doing so in real time, all of which requires programmatic execution. That is, coding. Indeed, many of the gains in AI and data science are thanks to what researchers are calling the “Unreasonable Effectiveness of Data”: being able to learn programmatically from astronomical data sets. This work is also highly nuanced and detailed, and doing the wrangling and cleaning properly is crucial for developing effective machine intelligence later on. Point-and-click software just isn’t sophisticated enough to substitute for good programming skills (after all, you can perform machine learning with Excel). This goes beyond just the usual mantra of “garbage in, garbage out.” ... Employers are trying to manage turbocharged public relations on social media while staying in regulators’ good graces despite that enhanced scrutiny.


A Simple and Scalable Analytics Pipeline


The core piece of technology I’m using to implement this data pipeline is Google’s DataFlow, which is now integrated with the Apache Beam library. DataFlow tasks define a graph of operations to perform on a collection of events, which can be streaming data sources. This post presents a DataFlow task implemented in Java that streams tracking events from a PubSub topic to a data lake and to BigQuery. An introduction to DataFlow and its concepts is available in Google’s documentation. While DataFlow tasks are portable, since they are now based on Apache Beam, this post focuses on how to use DataFlow in conjunction with additional managed services on GCP to build a simple, serverless, and scalable data pipeline. The data pipeline that performs all of this functionality is relatively simple. The pipeline reads messages from PubSub and then transforms the events for persistence: the BigQuery portion of the pipeline converts messages to TableRow objects and streams them directly to BigQuery, while the Avro portion of the pipeline batches events into discrete windows and then saves the events to Google Cloud Storage.
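The post's pipeline is written in Java; as a rough sketch of the same shape using the Apache Beam Python SDK (the topic, table, bucket names and schema below are placeholders, not taken from the post), the two branches look roughly like this:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

def to_table_row(message):
    """Convert a raw Pub/Sub message into a BigQuery-ready dict."""
    event = json.loads(message.decode("utf-8"))
    return {"event_type": event.get("type"), "payload": json.dumps(event)}

options = PipelineOptions(streaming=True)  # Dataflow runner options omitted for brevity

with beam.Pipeline(options=options) as p:
    events = p | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
        topic="projects/my-project/topics/tracking")

    # Branch 1: stream rows straight into BigQuery.
    (events
     | "ToTableRow" >> beam.Map(to_table_row)
     | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
           "my-project:analytics.events",
           schema="event_type:STRING,payload:STRING"))

    # Branch 2: batch events into fixed windows and land them in Cloud Storage.
    # The original post writes AVRO files; plain text is used here to keep the
    # sketch short, and windowed file writes in streaming mode are shown only
    # schematically (they may need extra runner support).
    (events
     | "Window" >> beam.WindowInto(FixedWindows(5 * 60))
     | "Decode" >> beam.Map(lambda m: m.decode("utf-8"))
     | "WriteToGCS" >> beam.io.WriteToText("gs://my-bucket/events/part"))
```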


7 risk mitigation strategies for the cloud

“Cloud services often encourage ‘casual use’ of data; I can collect, search and store anything just about anywhere” is the hook, says John Hodges, vice president of product strategy for AvePoint. “We often see this in systems like Box, DropBox or OneDrive, where there is a real mixed-use danger in how content is stored and shared.” The simple solution? Prohibit services where mixed-use is likely to be a problem. ... Zero trust is an IT security strategy wherein an organization requires every user, system or device inside or outside its perimeter to be verified and validated before connecting to its systems. How can you use a zero trust model to mitigate cloud risk? For Insurity, an organization that specializes in property and casualty insurance services and software, a zero trust approach means restricting access tightly. “We provide logical access to the minimum set of users with a minimum set of rights and privileges in line with job function requirements. This control is audited internally by our Enterprise Security team and externally as part of our annual SOC audit,” says Jonathan Victor, CIO of Insurity. Regularly examine user access levels and ask yourself whether they make sense.


What Is Microservices? An Introduction to Microservice Architecture


Now, let us look at a use case to get a better understanding of microservices. Let's take the classic example of a shopping cart application. When you open a shopping cart application, all you see is just a website. But behind the scenes, the application has a service for accepting payments, a service for customer support, and so on. Assume that the developers of this application built it in a monolithic framework, so all the features are put together in a single code base and sit on top of a single underlying database. Now suppose a new brand is coming to market and the developers want to add the details of the upcoming brand to the application. Then they not only have to rework the service for the new labels, but they also have to rebuild and redeploy the complete system. To avoid such challenges, the developers of this application decided to move from a monolithic architecture to microservices.
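Purely as a hypothetical sketch of the microservices side of that comparison, each capability becomes its own small, independently deployable service with its own endpoint. For example, a payments service could look like this (the framework, routes and field names are invented for illustration):

```python
# payments_service.py - one small service owning a single capability.
# A separate "brands" or "catalog" service could be added and deployed on its
# own, without touching or redeploying this one.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/payments", methods=["POST"])
def accept_payment():
    order = request.get_json()
    # ... charge the customer, talking only to this service's own database ...
    return jsonify({"order_id": order["order_id"], "status": "accepted"}), 201

if __name__ == "__main__":
    app.run(port=5001)  # the other services run as separate processes/containers
```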


Cybercriminals Battle Against Banks' Incident Response

Persistent attackers aren't backing down when banks detect them and launch their incident response processes, either. One in four bank CISOs in the Carbon Black study say their institution faced attackers fighting back when they got spotted, trying to deter defenses and the investigation into the attack. "They are leaving wipers or destructive malware to inhibit [IR], deleting logs, and inhibiting the capacity of forensics tools," for example, says Tom Kellermann, chief cybersecurity officer at Carbon Black. "Sometimes they are using DDoS to create smokescreens during events." These counter-IR activities are forcing banks to be more proactive and aggressive as well, he says. "They need to have threat hunting teams. You can't just rely on telemetry and alerts." While banks often rely on their IR playbooks, attackers have the freedom to freelance and counter IR activities. They're changing their malware code on the fly when it gets detected, deleting activity logs to hide their tracks, and even targeting bank security analysts and engineers to help their cause.


Certain types of content make for irresistible phishes

It used to be that fear, urgency and curiosity were the top emotional motivators behind successful phishes. Now they’ve been replaced by entertainment, social media and reward/recognition. According to the company, simulated eCards, internal promotion/reward programs, and a number of financial and compliance scenarios (e.g., phishes with “Financial Information Review” or “Compliance Training” in the subject line) are most successful at getting users to click. Employees should be trained to be aware of their emotional reactions to emails and see them as phishing triggers. “When creating simulations, remember consumer scams—those phony Netflix or LinkedIn emails sent to busy employees, who are glad to switch gears and click on something fun,” the company notes. “Understand the dynamics of entertainment or social phishing (think uncritical social acceptance and shortened URLs).” And when it comes to emails promising rewards, employees should be taught to be critical of rewards and deals that sound too good to be true.



Quote for the day:


"You don't have to hold a position in order to be a leader." -- Anthony J. D'Angelo


Daily Tech Digest - May 22, 2018

Smart Homes of Tomorrow – This Is Why We Can’t Have Nice Things


What new design concepts are needed to address these emerging technologies and the risks presented by the ever-connected smart home? It’s not about informing everyone that everything can be hacked. It’s about creating awareness and helping others understand the risks involved with modern technology. It’s about helping the builders and designers understand the technological solutions required by their clients and how to implement them correctly, so they too can educate the user on how to maintain their system safely and securely. Consumers need to be aware of their devices’ remote and local environment, and how their data is collected and stored. They also need to be aware of how their personal devices and appliances can be affected by outages outside of their control, like a DDoS attack on a cloud environment or something as simple as a power outage. Finally, we as a community need to put pressure on the manufacturers to produce secure devices with clear plans for how to patch and mitigate future vulnerabilities. Manufacturers also have to begin working together to secure user data and integrate it safely in a crowded environment of smart links and physical devices, ultimately preventing unauthorized remote access.



Legit tools exploited in bank heists


Native tools such as PowerShell and Windows Management Instrumentation (WMI) grant users exceptional rights and privileges to carry out the most basic commands across a network. These “non-malware” or fileless attacks account for more than 50% of successful breaches, the report said, with attackers using existing software, allowed applications and authorised protocols to carry out malicious activities. In this way, attackers are able to gain control of computers without downloading any malicious files and therefore remain unnoticed by malware-detection security systems.  ... Finally, PowerShell is used to connect to a command and control server to download a malicious PowerShell script designed to find sensitive data and send it to the attacker, all without downloading any malware. Almost every Carbon Black customer (97%) was targeted by a non-malware attack during each of the past two years, but the report notes that awareness of malicious usage for tools such as PowerShell has never been higher, with 90% of CISOs reporting seeing an attempted attack using PowerShell.


4 best practices for tapping the potential of prescriptive analytics

One potential value of prescriptive analytics is that you don’t necessarily need a ton of data to reach the best decision or outcome. Prescriptive analytics focuses the question you’re asking, and the decisions you’re trying to reach, to one tangible answer using a smart model of your business that is not dependent on the amount of data (how much or how little) that you have. Predictive techniques and functionalities can be great at identifying a multitude of options through statistical modeling and forecasting, as long as you have the relevant data—but that’s precisely the problem. It’s difficult to process and synthesize numerous options and the nuanced differences among them to determine what you should actually do. How can you be sure that you’re making the best decision? How can you be sure of the impact it will have on your company? Prescriptive analytics can involve hundreds of thousands of tradeoffs associated with a question you might have, and it uses the available data to identify the best decision and impact relative to the goal you’re trying to achieve. 
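One common way prescriptive analytics arrives at "the best decision relative to the goal" is by solving an optimization problem over a model of the business rather than over a large dataset. Here is a minimal, entirely made-up sketch with SciPy: choose production quantities for two products to maximize profit under two capacity constraints (all numbers are invented for the example).

```python
from scipy.optimize import linprog

# Maximize profit 40*x1 + 30*x2 (linprog minimizes, so the objective is negated),
# subject to labour: 2*x1 + 1*x2 <= 100 hours, machine time: 1*x1 + 2*x2 <= 80 hours.
result = linprog(
    c=[-40, -30],
    A_ub=[[2, 1], [1, 2]],
    b_ub=[100, 80],
    bounds=[(0, None), (0, None)],
)

print("Recommended plan:", result.x)    # the prescribed decision
print("Expected profit:", -result.fun)  # its predicted impact on the goal
```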


Technological advancements are changing the landscape for new and existing businesses alike


With Brexit looming over both small and large businesses operating within the UK, maintaining a competitive price point despite new tariffs will be a tough challenge to overcome. However, with new technological solutions opening up greater levels of efficiency and productivity, this task should not appear as daunting as many believe. Tech companies will continue to innovate new solutions for both current and future business issues, finding new ways to improve efficiency within the business environment. Voice recognition technology such as iBOB, for instance, is now freeing up valuable time for business owners and receptionists. It is reducing the need for routine customer service interactions on the phone, such as appointment bookings, which makes the remaining human contact more impactful and allows small businesses to focus their resources on more profitable aspects of their business. Employing affordable technological solutions, with an aim to focus added work hours on tasks more closely related to the bottom line, will allow existing businesses to maintain their competitive position within the market.


Cultural change more important than tech when implementing big data

“It is not your data or my data; it is the firm’s data, and the value you create for the business is from that data,” Tewary explained. “It is a transformation. It’s changing the people culture aspect, so there’s a lot of education. You know, you have to be an evangelist. You wear multiple hats to show people the value.” For Tewary at Verizon, finding advocates within the company for sharing big data was crucial. “We found champions,” he said. “For example, finance … was a good champion for us, where we used the data and analytics to really actually launch some very critical initiatives for the firm — asset-backed securities. … That created the momentum.” Dobrin agreed with this strategy of using champions within an enterprise to help lead the way for the entire company. “It’s not just a jump to the top of the ladder, because there’s just a lot of work that’s required to do it. You can do that within a business unit.” While the whole enterprise doesn’t need to get there all at the same time, as other areas of the enterprise begin to see the use of big data and how it can change the game, they will be open to the idea, Dobrin explained.


What is SDN? How software-defined networking changed everything

[Diagram: Open vSwitch / OpenContrail architecture]
To stay competitive in networking and to avoid being rendered obsolete by history, network equipment vendors have either blazed the trails for SDN, or found themselves adopting SDN reluctantly, perhaps looking a little singed in the process. One vendor clearly in the former camp, not the latter, is Juniper Networks. It plunged into the SDN field during the fateful year of 2012, first by purchasing a firm called Contrail, and then by building it into an open-source virtual appliance ecosystem unto itself: OpenContrail. As the diagram above depicts, OpenContrail serves as a device that provides the routing logic for distributed operating systems that host Docker containers. ... "It's a big part of operating and automating both a virtual and a physical infrastructure. It orchestrates the VNFs [virtual network functions] and puts together the service chains, all the way to the edge and to the core. Contrail uses vRouter and, in a distributed data center infrastructure, can reach into any part of the cloud, string up the required VNFs, stitch together the different pieces of the service, and deliver a custom service to a certain vertical, or a group of end customers. It automates that whole process of customizing the services that can be offered, ultimately, to our service provider customers."


How can you categorize consumers who keep remaking themselves?


Perhaps there won’t be a “mass market” for consumer goods anymore; just a mass of individuals who are increasingly difficult to categorize, and who reinvent themselves from moment to moment, from platform to platform. People will still want to gather in groups with like-minded people. But they will find them through technology and data and connect with them based on their shared values and interests rather than practical connections, such as living in the same area. Rather than being defined by markers such as gender, age or location, they will express themselves in ways that are more fluid and flexible. In one of the future worlds we modeled at our hack week in Berlin, these groups – or “tribes” – broke down physical borders and formed their own communities, both real and virtual. They started to pool their purchasing power and demand a different relationship with brands. Today, consumer-facing companies try to tailor offers and discounts that will appeal to individual consumers, based on their purchasing data – with varying degrees of skill and success. In the future, will products themselves be individualized?


Containers and microservices and serverless, oh my!

Instead of using containers to run applications, serverless computing replaces containers with another abstraction layer. Its functions or back-end services are one-job programs, which use compute resources without the developer having to worry about provisioning them. Instead of calling functions in the traditional sense, in serverless a developer calls a working program to provide a service for the program they're building. The Cloud Native Computing Foundation (CNCF) Serverless Working Group defines serverless computing as "building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.” Or, for another definition: "Serverless architectures refer to applications that significantly depend on third-party services,” says Mike Roberts, engineering leader and co-founder of Symphonia, a serverless and cloud architecture consultancy. “By using these ideas, and by moving much behavior to the front end, such architectures remove the need for the traditional 'always on' server system sitting behind an application.”
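As a deliberately generic illustration of that one-job-function model, here is a minimal AWS Lambda-style HTTP handler in Python; the platform, event shape and field names are assumptions for the sketch, not something taken from the article.

```python
import json

def lambda_handler(event, context):
    """One-job function: invoked on demand, scaled and billed per invocation.

    No server is provisioned or managed by the developer; the platform calls
    this handler whenever a request (or other trigger) arrives.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```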


Websites Still Under Siege After 'Drupalgeddon' Redux

Nearly two months after critical Drupal fixes were released, security firm Malwarebytes says it is still finding dozens of unpatched websites that have been exploited to host cryptocurrency miners or, in other cases, to redirect to malware (see Cryptocurrency Miners Exploit Widespread Drupal Flaw). The problems stem from two critical vulnerabilities in Drupal, both of which are remotely exploitable. That's a perfect combination for attackers: give them a widely used piece of software such as Drupal, as well as known vulnerabilities that can be easily and remotely exploited without even needing to trick a victim into taking any action. The first flaw, CVE-2018-7600, was revealed March 28, and the second, CVE-2018-7602, on April 25. The vulnerabilities were so severe that they were dubbed Drupalgeddon 2 and Drupalgeddon 3. Although patches have been available since the vulnerabilities were publicized, attackers are still taking advantage of websites that haven't been upgraded. "Rolling out a CMS is the easy part," writes Jerome Segura, lead malware intelligence analyst with Malwarebytes, in a blog post. "Maintaining it is where most problems occur due to lack of knowledge, fear of breaking something and, of course, costs."


Effective IoT Security Requires Machine Learning

Artificial intelligence (AI) is a branch of computer science that focuses on the theory and development of computer systems that are capable of performing tasks that normally require human intelligence, such as visual perception and decision-making. Machine Learning is a subset of AI that focuses on the practice of using algorithms to parse data, learn from it, and then make a prediction about something. In contrast to a static algorithm, a critical aspect of machine learning is that the machine is “trained” using large amounts of data and algorithms that give the machine the ability to continually learn how to perform a given task. Tools based on machine learning are necessary to supplement the existing set of security tools. These new tools help organizations identify and mitigate the emerging generation of security breaches that are designed to leverage both the legacy and evolving attack surfaces to evade the enterprise’s traditional defenses. When evaluating security tools based on machine learning, there are three key concepts that IT organizations should keep in mind.
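As a loose illustration of the "train on large amounts of data, then predict" idea described above (the article does not prescribe any particular algorithm or feature set; everything below is invented for the example), an unsupervised anomaly detector can be fitted to normal traffic and then used to flag unusual IoT device behaviour:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Pretend features per device connection: [bytes_sent, packets, distinct_ports]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 40, 2], scale=[50, 5, 1], size=(1000, 3))

# "Training" the model on what normal looks like, rather than hand-coding rules.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# New observations: one typical connection, one that sprays many ports.
new_flows = np.array([[510, 42, 2], [5000, 400, 60]])
print(detector.predict(new_flows))  # 1 = looks normal, -1 = flagged as anomalous
```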



Quote for the day:


"There are plenty of difficult obstacles in your path. Don't allow yourself to become one of them." -- Ralph Marston


Daily Tech Digest - May 21, 2018

Smaller micro datacentres utilising a modular approach are making it possible for data processing facilities to be based either on-site or as close to the location as possible. This edge computing is essential as it gives manufacturers the capabilities to run real-time analytics, rather than vast volumes of data needing to be shipped all the way to the cloud and back for processing. Modular datacentres give operators the scope to ‘pay as you grow’ as and when the time comes for expansion. The rise of modular UPS provides similar benefits in terms of power protection requirements too. And with all the additional revenues a datacentre could make from Industry 4.0, the need for a reliable and robust continuous supply of electricity becomes even more imperative. Transformerless modular UPSs deliver higher power density in less space, run far more efficiently at all power loads so waste less energy, and also don’t need as much energy-intensive air conditioning to keep them cool. Any data centre manager planning to take advantage of manufacturers’ growing data demands would be wise to review their current power protection capabilities.



Angular Application Generator - an Architecture Overview


Most of the time, tools like Angular CLI and Yeoman are helpful if we decide to simplify source generation by following a very strict, pragmatic standard for developing an application. Angular CLI can be useful for scaffolding, while the Yeoman ecosystem delivers very interesting generators for Angular and brings a great deal of flexibility, because you can write your own generator to fit your needs. Both Yeoman and Angular CLI enrich software development through their ecosystems, delivering plenty of predefined templates to build up the initial software skeleton. On the other hand, it can be hard to rely on templating alone. Sometimes standardized rules can be translated into a couple of templates that are useful in many scenarios, but that is not the case when trying to automate very different forms with many variations, layouts and fields, where countless combinations of templates would have to be produced. That brings only headaches and long-term issues, because it reduces maintainability and incurs technical debt.


Asigra evolves backup/recovery to address security, compliance needs

Asigra addresses the Attack-Loop problem by embedding multiple malware detection engines into its backup stream as well as the recovery stream. As the backups happen, these engines are looking for embedded code and use other techniques to catch the malware, quarantine it, and notify the customer to make sure malware isn’t unwittingly being carried over to the backup repository. On the flip side, if the malware did get into the backup repositories at some point in the past, the malware engines conduct an inspection as the data is being restored to prevent re-infection. Asigra also has added the ability for customers to change their backup repository name so that it’s a moving target for viruses that would seek it out to delete the data. In addition, Asigra has implemented multi-factor authentication in order to delete data. An administrator must first authenticate himself to the system to delete data, and even then the data goes into a temporary environment that is time-delayed for the actual permanent deletion. This helps to assure that malware can’t immediately delete the data. These new capabilities make it more difficult for the bad guys to render the data protection solution useless and make it more likely that a customer can recover from an attack and not have to pay the ransom.
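The excerpt does not describe Asigra's implementation in detail, but the general pattern it points at (require strong authentication to delete, then stage deletions in a time-delayed quarantine rather than erasing immediately) can be sketched hypothetically:

```python
import time

QUARANTINE_SECONDS = 72 * 3600  # a delete only becomes permanent after this delay

class BackupRepository:
    """Hypothetical sketch of delayed, authenticated deletion of backup objects."""

    def __init__(self):
        self.objects = {}   # object_id -> backup data
        self.pending = {}   # object_id -> time the delete was requested

    def request_delete(self, object_id, mfa_ok):
        """Stage a delete; it can still be cancelled during the quarantine window."""
        if not mfa_ok:
            raise PermissionError("multi-factor authentication required to delete backups")
        self.pending[object_id] = time.time()

    def purge_expired(self):
        """Run periodically: only deletes that have aged past the window are applied."""
        now = time.time()
        for object_id, requested_at in list(self.pending.items()):
            if now - requested_at >= QUARANTINE_SECONDS:
                self.objects.pop(object_id, None)
                del self.pending[object_id]
```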


Moving away from analogue health in a digital world


Technology has an important role to play in the delivery of world-class healthcare. Secure information about a patient should flow through healthcare systems seamlessly. The quality, cost and availability of healthcare services depends on timely access to secure and accurate information by authorised caregivers. Interoperability cannot be solved by any one organisation in isolation. What is needed is for providers, innovators, payers, governing bodies and standards development organisations to come together to apply innovative and agile solutions to the problems that healthcare presents. We can clearly see the pressure that healthcare organisations are under at the moment. Researchers at the universities of Cambridge, Bristol, and Utah found that a staggering 14 million people in England now have two or more long-term conditions – putting a major strain on the UK’s healthcare services. To add to this, recent figures show that more than half of healthcare professionals believe the NHS’s IT systems are not fit for purpose.


Shadow IT is a Good Thing for IT Organizations

The move to shadow IT is a good thing for IT. Why? It is a wake-up call. It provides a clear message that IT is not meeting the requirements of the business. IT leaders need to rethink how to transform the IT organization to better serve the business and get ahead of the requirements. There is a significant opportunity for IT to play a leading role in business today. However, it goes beyond just the nuts and bolts of support and technology. It requires IT to get more involved in understanding how business units operate and to proactively seek opportunities to advance their objectives. It requires IT to reach beyond the cultural norms that have been built over the past 10, 20, 30 years. A new type of IT organization is required. A fresh coat of paint won't cut it. Change is hard, but the opportunities are significant. This is more of a story about IT moving from a reactive state to a proactive state. For many, it requires a significant change in the way IT operates, both internally within the IT organization and externally with the non-IT organizations. The opportunities can radically transform the value IT brings to driving the business forward.


Fintech is disrupting big banks, but here’s what it still needs to learn from them

Though it’s definitely possible to grow while managing risk intelligently, it’s also true that pressure to match the “hockey-stick” growth curves of pure tech startups can lead fintechs down a dangerous path. Startups should avoid the example of Renaud Laplanche, former CEO of peer-to-peer lender Lending Club, who was forced to resign in 2016 after selling loans to an investor that violated that investor’s business practices, among other accusations of malfeasance. It’s not just financial risk that they may manage badly: the sexual harassment scandal that recently rocked fintech unicorn SoFi shows that other types of risky behavior can impact bottom lines, too. While it might be common for pure tech startups to ask forgiveness, not permission, when it comes to the tactics they use to expand, fintechs should be aware that they’re playing in a different, more risk-sensitive space. Here again, they can learn from banks — who will also, coincidentally, look for sound risk management practices in all their partners. Since the 2008 crisis, financial institutions have increasingly taken a more holistic approach to risk as the role of the chief risk officer (CRO) has broadened.


The Banking Industry Sorely Underestimates The Impact of Digital Disruption


According to McKinsey, many organizations underestimate the increasing momentum of digitization. This includes the speed of technological changes, resultant behavioral changes and the scale of disruption. “Many companies are still locked into strategy-development processes that churn along on annual cycles,” states McKinsey. “Only 8% of companies surveyed said their current business model would remain economically viable if their industry keeps digitizing at its current course and speed.” Most importantly, McKinsey found that most organizations also underestimate the work that is needed to transform an organization for tomorrow’s reality. Much more than developing a better mobile app, organizations need to transform all components of an organization for a digital universe. If this is not done successfully, an organization risks being either irrelevant to the consumer or non-competitive in the marketplace … or both. Complacency in the banking industry can be partially blamed on the fact that digitization of banking has only just begun to transform the industry. No industry has been transformed entirely, with banking just beginning to realize core changes.


Why tech firms will be regulated like banks in the future


First, we have become addicted to technology. We live our lives staring at our devices rather than talking to each other or watching where we are going. It’s not good for us and, according to Pew Research, is responsible for more and more suicides, particularly amongst the young. Pew analysis found that the generation of teens they call the “iGen” – those born after 1995 – is much more likely to experience mental health issues than their millennial predecessors. Second is privacy. Facebook and other internet giants are abusing our privacy rights in order to generate ad revenues, as demonstrated by Cambridge Analytica, but they’re not the only ones. Google, Alibaba, Tencent, Amazon and more are all making bucks from analyzing our digital footprints, and we let them because we’re enjoying it, as I blogged about recently. Third is that the power of these firms is too much. When six firms – Google (Alphabet), Amazon, Facebook, Tencent, Alibaba and Baidu – have almost all the information on all the citizens of the world held digitally, it creates a backlash and a fear. I’ve thought about this much and believe that it will drive a new decentralized internet.


Ericsson and SoftBank deploy machine learning radio network

"SoftBank was able to automate the process for radio access network design with Ericsson's service. Big data analytics was applied to a cluster of 2,000 radio cells, and data was analysed for the optimal configuration." Head of Managed Services Peter Laurin said Ericsson is investing heavily in machine learning technology for the telco industry, citing strong demand across carriers for automated networks.To support this, Ericsson is running an Artificial Intelligence Accelerator Lab in Japan and Sweden for its Network Design and Optimization teams to develop use cases. In introducing its suite of network services for "massive Internet of Things" (IoT) applications in July last year, Ericsson had also added automated machine learning to its Network Operations Centers in a bid to improve the efficiency and bring down the cost of the management and operation of networks. According to SoftBank's Tokai Network Technology Department radio technology section manager Ryo Manda, SoftBank is now collaborating with Ericsson on rolling out the solution to other regions. "We applied Ericsson's service on dense urban clusters with multi-band complexity in the Tokai region," Manda  said.


Models and Their Interfaces in C# API Design


A real data model is deterministically testable. Which is to say, it is composed only of other deterministically testable data types until you get down to primitives. This necessarily means the data model cannot have any external dependencies at runtime. That last clause is important. If a class is coupled to the DAL at run time, then it isn’t a data model. Even if you “decouple” the class at compile time using an IRepository interface, you haven’t eliminated the runtime issues associated with external dependencies. When considering what is or isn’t a data model, be careful about “live entities”. In order to support lazy loading, entities that come from an ORM often include a reference back to an open database context. This puts us back into the realm of non-deterministic behavior, wherein the behavior changes depending on the state of the context and how the object was created. To put it another way, all methods of a data model should be predictable based solely on the values of its properties. ... Parent and child objects often need to communicate with each other. When done incorrectly, this can lead to tightly cross-coupled code that is hard to understand.
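The article's examples are in C#; to stay consistent with the other snippets in this digest, here is the same distinction sketched in Python: a real data model's behaviour is predictable purely from its property values, while a "live entity" drags an open database context along and so behaves differently depending on external state. The session interface below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class OrderLineModel:
    """A real data model: deterministic, no external dependencies at runtime."""
    quantity: int
    unit_price: float

    def total(self) -> float:
        # Predictable purely from property values; trivially unit-testable.
        return self.quantity * self.unit_price


class OrderLineEntity:
    """A 'live entity': behaviour depends on an open database context."""

    def __init__(self, row_id, db_session):
        self._row_id = row_id
        self._session = db_session  # hidden runtime dependency (hypothetical interface)

    def total(self) -> float:
        # Lazy loading: the answer depends on the session's state and connection,
        # and on how the object was created - non-deterministic to test.
        row = self._session.fetch("order_lines", self._row_id)
        return row["quantity"] * row["unit_price"]
```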



Quote for the day:
"Strategy Execution is the responsibility that makes or breaks executives" -- Alan Branche and Sam Bodley