Oracle has revamped its commercial support program for Java SE (Standard Edition), opting for a subscription model instead of one that had businesses paying for a one-time perpetual license plus an annual support fee. The subscriptions will be available in July 2018. (Personal, noncommercial usage continues to be free and does not require a subscription.) Called Java SE Subscription, the new program for mission-critical Java deployments provides commercial licensing, with features such as the Advanced Java Management Console. Oracle Premier Support is also included for current and previous Java SE releases; it is required for Java SE 8 and includes support for Java SE 7. ... The price is $25 per month per processor for servers and cloud instances, with volume discounts available. For PCs, the price starts at $2.50 per month per user, again with volume discounts. One-, two-, and three-year subscriptions are available. Oracle has published the terms of its new Java SE Subscription plans. The previous Java SE Advanced program cost $5,000 for a license for each server processor plus a $1,100 annual support fee per server processor, as well as a $110 one-time license fee per named user and a $22 annual support fee per named user.
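At list prices, the shift can be sketched with some back-of-the-envelope arithmetic (a rough comparison only; it ignores volume discounts and the per-named-user fees):

```python
# Rough per-server-processor cost comparison at list prices
# (ignores volume discounts and per-named-user fees).
LEGACY_LICENSE = 5000           # one-time Java SE Advanced license per processor
LEGACY_SUPPORT_PER_YEAR = 1100  # annual support fee per processor
SUBSCRIPTION_PER_MONTH = 25     # new Java SE Subscription price per processor

def legacy_cost(years):
    """Total legacy cost per processor over the given number of years."""
    return LEGACY_LICENSE + LEGACY_SUPPORT_PER_YEAR * years

def subscription_cost(years):
    """Total subscription cost per processor over the given number of years."""
    return SUBSCRIPTION_PER_MONTH * 12 * years

for years in (1, 3, 10):
    print(years, legacy_cost(years), subscription_cost(years))
# → 1 6100 300
# → 3 8300 900
# → 10 16000 3000
```

At these list prices the subscription remains far cheaper per server processor; actual costs depend on discounts and deployment size.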
Sometimes, it’s just a black box because it’s protected by IP. So, many people will have heard of this model that is used for recidivism predictions. So, this model was created by a company, and the model is a pay-for-use model. And the model is just not something that’s known to us, because we’re not allowed to know. By law, it’s something the company owns, and the courts have, several times, upheld the right of the company to keep this model private. So maybe you’re a person this model has just predicted to be at high risk of committing another crime, and because of that, maybe you’re not going to get parole. And you might say, “Hey, I think I have a right to know why this model predicts that I’m high-risk.” And so far, the courts have upheld the right of the company that created the model to keep the model private and not to tell you in detail why you’re being predicted as high or low risk. Now, there are good reasons for this. You don’t necessarily want people to be able to game the model. And in other cases, you really want to protect the company that went to the expense and risk of generating this model. But that’s a very complex question.
Continuous integration was born around the idea that the earlier you find a bug, the cheaper it is to fix. But this priority could become problematic if there is not an easy, fast and reliable way to assess whether changes are ready to be integrated and then ready to go to production. When you adopt continuous testing as a key practice, your code must always be ready for integration, according to Isabel Vilacides, quality engineering manager at CloudBees. "Tests are run during development and on a pull request basis," she explained. "Once it's integrated, it's ready to be delivered to customers." Continuous testing doesn't stop at functional testing; it involves considering nonfunctional aspects, such as performance or security. The process aims to prevent bugs through code analysis, before risks become apparent in production. Continuous testing requires cohesive teams, where quality is everyone's responsibility, instead of separate teams for development, testing and release. The approach also makes automation a priority and shifts quality to the left, making it an earlier step in the pipeline.
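The idea of gating integration on both functional and nonfunctional checks can be sketched as a hypothetical pre-merge quality gate (the check names here are illustrative, not any specific tool's API):

```python
# Hypothetical pre-merge quality gate: a change is ready to integrate
# only when every check -- functional and nonfunctional -- passes.
def ready_to_integrate(checks):
    """Return True only if all checks pass."""
    return all(checks.values())

pr_checks = {
    "unit_tests": True,
    "static_analysis": True,    # catch bugs before they surface in production
    "performance_budget": True, # nonfunctional: performance
    "security_scan": False,     # nonfunctional: security -- a failure blocks the merge
}
print(ready_to_integrate(pr_checks))  # → False: the change is not ready
```

Running such a gate on every pull request is what keeps code always ready for integration, rather than deferring quality checks to a separate release phase.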
In the old days, the CISO, I was told, was just an advisory position. Now, my roles, the roles I've held in the last seven years or so, are much more than advisory. Advisory is part of it for sure, but there's a lot more leadership involved. I see it becoming more and more a position reporting directly to the CEO, a truly C-level position. I see CISOs having vice presidents reporting to them going forward. And I see my job increasingly being described as chief ethicist, asking: What's the right thing to do, and not just what's the most secure thing to do? What's the proper behavior? What do customers expect from us? If a compromise has to be made, what's the most ethical compromise to make? ... It's important for at least two different reasons. One, from a practical perspective, I've talked a lot about the skills gap. If we're blocking 50% of the planet from joining this career path, we're really contributing to our biggest challenge. Then the other part: Women across the globe are economically oppressed, and information security is a lucrative field. I want to get women into the information security field so they can be financially independent and make a good living.
Sadly, the move from a private cloud to a public cloud is not easy, whether you go hybrid or all-public. The main reason is that there is no direct mapping from private cloud services, which are the basics (storage, compute, identity access management, and database), to public cloud services, which offer those basic services plus thousands of other higher-end services. Private clouds today are where public clouds were in 2010; public clouds today are, well, in 2018. You’re in essence migrating across a ten-year technology gap as you move your applications between private and public. Complexity also comes in when you’ve already coupled your applications to the services in the private cloud, which is typically going to be OpenStack. There are very few OpenStack deployments on public clouds, and none of them are run by the Big Three providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure). That means you can’t do a one-to-one mapping of the cloud services from your private cloud to the public clouds. And that in turn means you need to remap these services to similar services on the public cloud.
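That remapping exercise can be sketched as a lookup table. The OpenStack-to-AWS pairings below are rough functional equivalents chosen for illustration, not an official or complete mapping:

```python
# Illustrative remapping of basic OpenStack services to rough AWS
# equivalents; these pairings are approximations, not an official mapping.
OPENSTACK_TO_AWS = {
    "Swift":    "S3",   # object storage
    "Nova":     "EC2",  # compute
    "Keystone": "IAM",  # identity and access management
    "Trove":    "RDS",  # database
}

def remap(service):
    """Return the closest public-cloud equivalent, or None if none maps directly."""
    return OPENSTACK_TO_AWS.get(service)

print(remap("Nova"))    # → EC2
print(remap("Ironic"))  # → None: no one-to-one equivalent; a redesign is needed
```

The gaps (the `None` cases) are where the real migration work lies: those applications must be redesigned against services that have no private-cloud counterpart.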
As in any game against an adversary, you need both defensive and offensive strategies. An active defense adds the offense-driven actions so that organizations can proactively detect and derail would-be attackers before they have time to get comfortable within the network, stopping attacks early and gathering the threat intelligence required to understand the attack and prevent a similar recurrence. Sometimes active defense includes striking back at an attacker, but this is reserved for military and law enforcement, which have the resources and authority to confirm attribution and take appropriate action. An active defense strategy changes the playbook for cybersecurity professionals by combining early detection, substantiated alerts and information sharing to improve incident response and fortify defenses. It is no longer a “nice to have” but is becoming widely accepted as a “must have,” as prevention-only tactics are no longer enough. With well-orchestrated breaches continuously making headlines, an active defense strategy is becoming a priority.
The malware comes equipped with three layers of evasion techniques, which the researchers at Deep Instinct who uncovered it describe as complex, rare and "never seen in the wild before". Dubbed Mylobot after a researcher's pet dog, the malware's origins and delivery method are currently unknown, but it appears to have a connection to Locky ransomware -- one of the most prolific forms of malware of last year. The sophisticated nature of the botnet suggests that those behind it aren't amateurs: Mylobot incorporates various techniques to avoid detection, including anti-sandboxing, anti-debugging, encrypted files and reflective EXE loading -- the ability to execute EXE files directly from memory without having them on the disk. The technique is uncommon, was only uncovered in 2016, and makes the malware even harder to detect and trace. On top of this, Mylobot incorporates a delaying mechanism that waits two weeks before making contact with the attacker's command-and-control servers -- another means of avoiding detection.
Web applications running on IIS are easy to test because most code is just HTML, .NET or another Web framework running on top of the IIS platform. Setting up a Windows Server 2019 server with IIS and then uploading Web code to it is a quick and easy way to confirm that the Web app works, and such a machine can easily be the first 2019 server added to an environment. Fileservers are also good early targets for migrating from old to new. Fileservers often hold gigabytes or even terabytes of data to copy across, and they are also the systems that may not have been upgraded recently. In early-adopter environments, the old fileservers are often still running Windows Server 2008 (which goes end-of-life in the summer of 2019) and could use an upgrade. File-migration tools like Robocopy, or a drag-and-drop between Windows Explorer windows, can retain tree and file structures as well as access permissions as content is copied between servers. Tip: After content is copied across, new servers can be renamed with the old server name, thus minimizing interruption of user access.
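A Robocopy invocation that preserves directory trees and NTFS permissions can be sketched as follows. The paths are placeholders, and copying security information with /COPYALL requires appropriate privileges on both servers:

```python
# Sketch: building a Robocopy command line that mirrors a share while
# preserving directory structure and NTFS permissions.
import subprocess

def build_robocopy_cmd(src, dst, log_path):
    """Assemble a Robocopy command; src, dst and log_path are placeholders."""
    return [
        "robocopy", src, dst,
        "/E",        # copy subdirectories, including empty ones
        "/COPYALL",  # copy data, attributes, timestamps, ACLs, owner, auditing info
        "/R:2",      # retry failed copies twice
        "/W:5",      # wait 5 seconds between retries
        f"/LOG:{log_path}",
    ]

cmd = build_robocopy_cmd(r"\\OLDFS01\data", r"\\NEWFS01\data", r"C:\logs\migration.log")
# Robocopy exit codes below 8 indicate success, so a run would be checked with:
# subprocess.run(cmd).returncode < 8
print(" ".join(cmd))
```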
Sometimes you will find that they have different mental models for the same business concepts, or use the same terms to describe different concepts; if so, it’s an indication that these concepts belong to different bounded contexts. From the beginning, Khononov and his team used these discovered boundaries to define services, with each boundary becoming a service. He notes, though, that these services represented quite wide business areas, sometimes resulting in a bounded context covering multiple business subdomains. As their next step, they instead used these subdomains as boundaries and created one service for each business subdomain. In Khononov’s experience, having a one-to-one relationship between a subdomain and a service is a quite common approach in the DDD community, but they didn’t settle for this; they continued and strove for even smaller services. Looking deeper into the subdomains, they found business entities and processes and extracted these into their own services. At first, this final approach failed miserably, but Khononov points out that in later projects it has been more successful.
Far too often, information security teams have only the broadest overview of the wider workings of their organisations. Other staff, meanwhile, tend to have little knowledge of or interest in information security practices, which they often believe have been designed to hinder their day-to-day work. However, when any employee with Internet access can jeopardise the entire organisation with a single mouse-click, it should be clear that the responsibility for information security lies with every member of staff and that security practices need to be embedded in the working practices of the whole business. Insider attacks are not limited to the malicious actions of rogue staff. The term also refers to the unwitting behaviour of improperly trained employees, or to the exploitation of inappropriately applied privileges and poor password practices by malicious outsiders. Staff need regular training on information security practices to ensure they’re aware of the risks they face on a daily basis. The vast majority of malware is spread by drive-by downloads and phishing campaigns, both of which exploit human error.
Quote for the day:
"Trust is one of the greatest gifts that can be given and we should take great care not to abuse it." --Gordon Tredgold