Just got back from a meal with Jim. Jim is a guy I met on the conference bus. He is a similar age to me and has a fairly similar background. His gaming history started on the Atari 2600, then an Amiga, Sega consoles and then PC gaming: Ultima Online, Quake, World of Warcraft etc. He used to be sponsored to play Quake and his team was ranked No. 1 in the world. He currently works in an infrastructure team that supports Sony's streaming gaming service, based in California. So all in all lots of common ground to talk about and very interesting.
Best of all he reminded me that DoubleTree hotels give out free cookies if you ask, so I am happily munching on one of those now as I write this.
Another good day of sessions today at the conference. For a change, in the morning I hardly moved; all of the talks were either in the same room or next to each other, which has been unheard of until now. I met up with Richard from ISN and we did a few sessions together. Apart from bumping into him outside the lunch hall on Monday I hadn't seen him, so that was good.
Still trying desperately to win an Xbox at the expo and failing, but I did win $20 on one of the games. The expo has been really good for talking to all the vendors, and Microsoft themselves have a massive presence there with experts from pretty much all of the Microsoft technologies.
There are a lot more people from Europe here than I remember in the past. One guy I got talking to from Sweden, who works for a consulting company, said that they had sent 50 people over to the conference.
Tomorrow is the last full day, with the party at Universal tomorrow night. I heard it was the first time Universal had closed the park for a private party, so that's pretty awesome.
So, notes from today:
Server 2016
Azure File Sync
Centralize file services in Azure - hub and spoke model
Local file servers cache the file server data in multiple locations
Data is cached on the local file servers based on policies that you set. It builds a heat map of the most used files (rough sketch of the idea after these notes).
For files outside that cache only the metadata is stored, so there is very little footprint on the server
If a user requests a file, parts of the file are pulled back. For example, if it's a zip file and they only open one of the items in the zip, it will only pull the part of the zip file it needs; if it's a large video file, it will start to stream the video rather than wait to pull the whole file.
The full data set resides in Azure and can be backed up to Azure
Replication can be set at a folder level and can easily be added to existing file servers - no need to re-engineer
If you lose a file server, stand up a new server with the same disk structure, install the agent and very quickly you will have a fully populated server. It may be slow initially while it caches the files the users are requesting, but users will have full access to the entire data set instantly. You don't have to wait for terabytes of data to be restored.
Downloadable agent installs on 2012 R2 or higher
http://Aka.ms/afs
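As a way of fixing the idea in my head, here's a rough Python sketch of the tiering behaviour described above. This is purely my own mental model, not the Azure File Sync agent or its API; the heat threshold, class names and window are all made up.

```python
# Rough mental model only - not the Azure File Sync agent or API. Files above a
# "heat" threshold keep their full content on the local server; cold files keep
# only metadata and their content is recalled from Azure on demand.
import time
from dataclasses import dataclass, field

@dataclass
class TieredFile:
    name: str
    size_bytes: int
    cached_locally: bool = False                      # full content held locally?
    access_times: list = field(default_factory=list)  # metadata is always local

class LocalFileServer:
    def __init__(self, heat_threshold=3, window_seconds=7 * 86400):
        self.heat_threshold = heat_threshold  # accesses in the window to stay hot
        self.window_seconds = window_seconds
        self.files = {}

    def register(self, name, size_bytes):
        self.files[name] = TieredFile(name, size_bytes)

    def heat(self, f):
        cutoff = time.time() - self.window_seconds
        return sum(1 for t in f.access_times if t >= cutoff)

    def read(self, name):
        f = self.files[name]
        f.access_times.append(time.time())
        if not f.cached_locally:
            # Cold file: only metadata is local, so recall content from Azure.
            # (The real service recalls just the ranges the client asks for.)
            f.cached_locally = True
            return f"recalled {name} from Azure"
        return f"served {name} from the local cache"

    def apply_policy(self):
        # Periodic tiering job: drop local content for files that have gone cold.
        for f in self.files.values():
            if f.cached_locally and self.heat(f) < self.heat_threshold:
                f.cached_locally = False  # keep the metadata, free the space
```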
Windows Server 1709
Out soon
Nano Server was set for use in the container space but also for hosting, but hosting take-up has been poor. With the new version of Windows they have resized Nano to focus on containers, and Server Core will focus on hosting.
There are two channels for Windows Server updates: semi-annual and long term. GUI versions of Windows will be restricted to the long-term channel, which should see updates every 2-3 years.
The Nano Server compressed image has been reduced from a base image of 383MB to 78MB
Containers
Microsoft strongly believe that we are currently in the same space with containers as we were 10 years ago with virtualization. Everyone was looking at virtualization 10 years ago when adoption was minimal, not believing that we would be running mission-critical apps on it, and now there is more virtual than physical. Containers are here to stay and will be as big as virtualization in 10 years' time.
Old .NET legacy apps
There is a Docker conversion program, a bit like P2V, that will convert your .NET app to a container
Free Docker tools on GitHub - once moved to a container it will use a fraction of the memory and storage
1 hour to convert an application - can easily test to see if it works.
Everything that the application needs sits in the container - no pre-reqs needed, so it's completely portable between servers.
Devs can create an app with all its dependencies, knowing it will work as it's moved through environments (rough sketch below).
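To illustrate the portability point, here's a small sketch using the Docker SDK for Python (the `docker` package). The image name is a made-up placeholder; the idea is simply that the same image runs unchanged wherever there's a Docker host.

```python
# Sketch using the Docker SDK for Python (pip install docker). The image name
# "mycorp/legacy-orders:1.0" is a made-up placeholder - the point is that the
# same image runs unchanged on a laptop, a test box or production, because all
# of the app's dependencies are inside the image.
import docker

client = docker.from_env()                  # talks to whichever Docker host you're on

image = client.images.pull("mycorp/legacy-orders:1.0")   # containerised legacy app

container = client.containers.run(
    image,
    detach=True,
    ports={"80/tcp": 8080},                  # expose the app on the host
    environment={"APP_ENVIRONMENT": "test"}  # hypothetical setting
)
print(container.id, container.status)
```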
Project Honolulu
This is server management redesigned to allow you to administer Server Core machines from any device
All the common admin tools displayed in a web portal
Very good - it really makes Core easier to manage, and it's a FREE download
IIS is not required but WMF 5.1 is required on 2012 or 2012 R2
Honolulu will also plug into Azure services such as Azure Backup
Software Assurance for Windows
Seemed to suggest that with SA there is a big cost-saving benefit to running servers in Azure - for each server license you can run a server in Azure and pay the Linux price, i.e. just the compute price
For every Windows Server license that has SA you can use up to two virtual machines and up to 16 cores in Azure (rough sketch of the maths below)
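Back-of-the-envelope sketch of that licensing rule exactly as I noted it down - definitely check the real Azure Hybrid Benefit terms rather than trusting my maths; this is just the arithmetic.

```python
# Back-of-the-envelope only, based on my note above ("up to two VMs and up to
# 16 cores in Azure for every Windows Server licence with SA") - check the real
# Azure Hybrid Benefit terms, this is just the arithmetic.
import math

def licences_needed(num_vms, total_cores):
    # Whichever constraint bites first (VM count or core count) sets the number.
    return max(math.ceil(num_vms / 2), math.ceil(total_cores / 16))

print(licences_needed(num_vms=10, total_cores=96))   # -> 6 by this rough rule
```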
2008 migrations
There may be a number of programs / funding available to help migrate 2008 servers to Azure
MTA
MTA - Modernize Traditional Apps
Partnership between Microsoft, Docker and Avanade
https://blog.docker.com/2017/04/modernizing-traditional-applications/
help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure
They will containerise a legacy .NET application and within 5 days they will get it running in Azure
Funding available for these programs
ROI savings for modernizing apps
Shielded VMs
Shielded VMs are coming soon to Azure. This will mean that encrypted VMs won't be able to be run outside of that infrastructure.
SATADOM
SATADOM - traditional flash memory connected to a SATA connector
Frees up a drive slot
128GB of storage, can boot the OS
Windows Defender Advanced Threat Protection
Built into the OS
Cloud-based threat protection
Shows the breadcrumbs related to attacks
Optimizing Azure for DR
4 layers of DR to consider:
Storage
Hypervisor
Apps
OS
Case study capstone - mini corp
StorSimple - iSCSI storage device
Sits on premises
Every bit of data written to it is written to Azure
Hot data is cached locally
It's basically an Azure File Sync appliance
Hyper-V replication to Azure
Hyper-V Replica can replicate to another Hyper-V server or to Azure using Azure Site Recovery
In Azure you are paying only for storage until the machine is turned on
For machines not running on Hyper-V - VMware or physical:
Azure Site Recovery can replicate the server to Azure using a disk driver
Need to run a process server on premises
It writes the data to a VHD file in Azure. When you have a disaster you can create a VM in Azure and attach the disks
Failback would have to be to a virtual machine; you can't fail back to physical
ASR is free for 31 days to allow you to migrate
Application-level replication
ASR doesn't require compute in Azure - application-level replication will
There needs to be good connectivity to Azure
Can stretch clusters or availability groups to Azure
For front-end servers such as web servers, use VM Scale Sets
Removes the need for licenses and compute charges
Create a gold image that you create instances from
From this you can scale based on metrics or a schedule (rough sketch below)
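Here's a rough sketch of the "scale on metrics or on a schedule" idea - not the Azure autoscale API, just the decision logic. The thresholds, hours and instance limits are made-up examples.

```python
# Not the Azure autoscale API - just the decision logic for a scale set built
# from a gold image. Thresholds, hours and instance limits are made-up examples.
from datetime import datetime

MIN_INSTANCES, MAX_INSTANCES = 2, 10

def desired_instances(current, avg_cpu_percent, now):
    desired = current

    # Schedule rule: keep extra capacity during business hours (Mon-Fri, 08:00-18:00).
    if now.weekday() < 5 and 8 <= now.hour < 18:
        desired = max(desired, 4)

    # Metric rules: scale out when hot, scale in when idle.
    if avg_cpu_percent > 75:
        desired = max(desired, current + 1)
    elif avg_cpu_percent < 20:
        desired = min(desired, current - 1)

    return max(MIN_INSTANCES, min(MAX_INSTANCES, desired))

print(desired_instances(current=3, avg_cpu_percent=82.0,
                        now=datetime(2017, 9, 28, 10, 0)))   # -> 4
```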
Orchestration
Script DR recovery to create the machines needed and bring them up in the order that's required (sketch below).
Script your whole DR plan
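A quick sketch of what scripting the bring-up order could look like - the machine names, the dependency map and start_vm() are all hypothetical placeholders for whatever ASR/ARM calls you'd really make, e.g. from a recovery plan.

```python
# Sketch of scripting the bring-up order for DR. Machine names, the dependency
# map and start_vm() are hypothetical placeholders - in reality this would call
# Azure Site Recovery / ARM (e.g. from a recovery plan).
from graphlib import TopologicalSorter   # Python 3.9+

# Each machine lists what must already be running before it starts.
DEPENDS_ON = {
    "dc01":  [],
    "sql01": ["dc01"],
    "app01": ["sql01"],
    "web01": ["app01"],
    "web02": ["app01"],
}

def start_vm(name):
    print(f"starting {name} ...")        # placeholder for the real start call

def run_dr_plan():
    # static_order() yields machines so that dependencies always come first.
    for vm in TopologicalSorter(DEPENDS_ON).static_order():
        start_vm(vm)                     # dc01 -> sql01 -> app01 -> web01/web02

if __name__ == "__main__":
    run_dr_plan()
```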
Azure Traffic Manager provides failover DNS
Accenture customer example
411,000 employees, 10 datacenters, 2 PB of data in OneDrive
77% in the cloud, 13,200 virtual machines
Wanted to be able to test DR without affecting production
Copy of production environment in Azure
Copied DCs and servers
Replicated using ASR
Machines have the same names, so they only need to change the IPs
At any time they can bring up DR
Users connect from the DR test machine
Used to have DR datacenters
Used to test yearly
People would give up a weekend to test
Very similar to us currently
Building on Blockchain
This one started to get heavy fast. The eye-opener for me was that blockchains can be used for things other than just currency.
Blockchain is a secure, shared, distributed ledger
Data is stored in a ledger, much like a database
Everyone on the network has a copy of it and everyone can add to it
Everyone has to agree that an addition is true before it can be committed
Currently, traditional transactions are based on trust in third parties
Where trust doesn't exist there are a lot of manual checks or 3rd-party brokers
Example: a grocery supply chain
Every party in the chain - farmer, wholesaler, distributor and store - has information about the product
In a food contamination issue it's hard to trace back, as everyone stores different bits of information in different ways, and that information could be susceptible to being changed
With blockchain everyone would share the same information (minimal sketch below)
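To get my head around the ledger idea, here's a toy Python sketch of a hash-chained, tamper-evident ledger. It is not any real blockchain platform and has no consensus step - just the chaining that makes changes detectable by anyone holding a copy.

```python
# Toy illustration of a shared, tamper-evident ledger - not a real blockchain
# platform and with no consensus step. Each block carries the hash of the
# previous block, so changing an earlier entry invalidates everything after it.
import hashlib
import json
import time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "timestamp": 0, "data": "genesis", "prev_hash": ""}]

    def add(self, data):
        prev = self.chain[-1]
        self.chain.append({
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "data": data,
            "prev_hash": block_hash(prev),   # the link that chains the blocks
        })

    def is_valid(self):
        # Anyone holding a copy of the chain can verify it independently.
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add("farmer -> wholesaler: 200 crates of apples, batch 42")
ledger.add("wholesaler -> store: 50 crates of apples, batch 42")
print(ledger.is_valid())              # True
ledger.chain[1]["data"] = "tampered"
print(ledger.is_valid())              # False - the change is detectable
```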
Blockchains were initially created for currency transactions and never for enterprise databases
Microsoft are working on a framework to overcome some of the shortcomings that would need to be addressed to transition it to an enterprise database - look at the Coco Framework
It's not easy to build an application around blockchain, so Microsoft are coming up with a toolkit to assist with this
Getting started
Does blockchain apply to my scenario?
What technology should I build on - what ledger?
How do I translate workflows into smart contracts?
How do I build a distributed app?
Ethereum - smart contracts - adds decision-making logic/workflow. So instead of Bitcoin, where I just say "I want to give xx money", you can specify "I want to give xx money on a Monday if it's sunny" (rough sketch below).
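The "money on a Monday if it's sunny" example stuck with me, so here it is as plain Python just to show the decision logic a smart contract adds over a simple transfer. This isn't Solidity or anything Ethereum-specific, and is_sunny() is a stand-in for whatever oracle would feed real-world data into an actual contract.

```python
# Not Solidity or anything Ethereum-specific - just the decision logic from the
# talk's example expressed in Python. is_sunny() is a stand-in for the oracle
# that would feed real-world data into an actual smart contract.
from datetime import date

def is_sunny(on):
    return True                     # placeholder oracle

def conditional_payment(amount, today):
    """Release the payment only when both conditions of the 'contract' hold."""
    if today.weekday() == 0 and is_sunny(today):   # 0 = Monday
        return amount               # transfer executes
    return 0.0                      # conditions not met, nothing moves

print(conditional_payment(100.0, date(2017, 10, 2)))   # a Monday -> 100.0
```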
Issues
No standard for smart contract languages, and everyone is implementing them differently
Workflow is not hidden from everyone else in the chain
Workload computation is currently restricted