Tuesday, November 15, 2016

DDoS Attack (Distributed Denial of Service Attack)

Did you know?

Click here for a visualization of (and data on) the major denial of service attacks happening right now or in the past.

Click here for the latest news on DDoS attacks.

Monday, October 17, 2016

Links of Interest

The C4 software architecture model
Google Maps Geocoding API - Address parsing, etc
F-secure VPN - proxy to any location, great, but not so cost effective
Google DeepMind Forum Post
Google DeepMind article in Nature Magazine
Public Lecture with Google DeepMind's Demis Hassabis
Best practices for deploying passwords and other sensitive data to ASP.NET and Azure App Service
Cloud Design Patterns
Selenium Bootcamp
Have I been pwned? - check if an email address has been breached.
Havij - SQL Injection penetration tool
Shodan.io - search engine for connected devices
Best of Troy Hunt
DevOps sessions from Build

REST
Reading:

Online Training:

Links to how to merge Word Docs (.docx)

Stackoverflow - ChunkAlt

Tuesday, September 27, 2016

Mapping a drive in Windows using Powershell

I recently wanted to test an installer, but it required that I have an E drive (like the server does). I could modify the install configuration, but that would mean making that change each time I want to test the installer. Instead, I wanted an E drive on my PC. I don't have an extra drive and I didn't want to repartition or resize the existing partition to create a real drive. I figured out that I can create a drive letter using a mapped drive that points to a path on my local C drive. I need to run the installer in Powershell as Administrator, so creating a mapped drive through the Windows UI won't help me since that drive is not accessible when running Powershell as Administrator.

The solution to the problem is actually quite easy. The secret is to create the mapped drive (in the Powershell session that is running as Administrator) before I run the installer, using the Powershell command shown below.

New-PSDrive -Name "E" -PSProvider FileSystem -Root "\\mylaptop\c$\E_Drive" -Persist

In this example I created a directory called E_Drive on my C drive to act as the E drive. The directory name doesn't matter.

Now at the Powershell prompt I can access the E drive as if it is an actual drive.

Friday, September 23, 2016

Takeaways from Agile on the Beach 2016

I attended Agile on the Beach in Falmouth (England). It was really great to hear so many people with different experiences. The videos and slides can be found here. Below are some of the highlights of what I found particularly interesting.

Keynote by Dr. Linda Rising
  • Surprisingly, people are moved to action by stories NOT facts (or evidence). In fact, when someone is shown evidence that they are wrong, they only stand firmer in their belief (as a defense mechanism).
  • A placebo can be just as useful as the "real thing" because what we BELIEVE can make us fail or succeed. The placebo allows us to believe in something.
Books: Thinking, Fast and Slow ... The Progress Principle (I think these are the books :)

Continuous Delivery
  • Tools: NCrunch, CruiseControl.Net, R#, NANT
  • Policy: Develop to one trunk (no long standing branches). Because of this, GIT may not be the best choice.
  • Goal: Want SMALL commits frequently. For example, about once an hour and ideally 2 files.
  • Break refactoring out in a separate commit.
  • Policy: Can't commit when build is broken
  • Policy: Run all tests before commit (on local machine)
  • Policy: Don't go home until a broken build is fixed. This doesn't mean late hours necessarily. It could just mean backing out the offending commit, and addressing it the next day, then recommit.
  • Goal: a pipeline should be about 4 minutes for quick feedback.
  • Goal: it is ideal for it to be faster to redeploy a change using the pipeline than manually backing out a release.
  • Goal: 75% code coverage. Most will be between 50% and 80%
  • Use Feature Toggles when required, but avoid them if you can.
  • Warm up an app after deploy
  • Measure commit to live time.
Quality Control
  • Broken Windows Effect - one broken window (bug, issue, technical debt, not tested unit, etc) begets more broken windows.
  • As developers we spend 70% of our time reading code (ours or someone else's) and the rest of the time copying and pasting. Consider poorly written code to be a broken window, and copying and pasting that implementation pattern the creation of more broken windows.
  • Anti-patterns: Fat controllers, large models, functionality grouped into services or managers (not sure I understand the last one...).
  • Enforce standards on each commit and fail the build if they are not met. Could be formatting rules, naming conventions, etc. This makes code diffs much easier to read because we only have to read meaningful changes, not changes to code formatting.
  • Code that is easy to change (the opposite of a code smell) has: high cohesion, loose coupling, little duplication, and low cyclomatic complexity.
  • Tools: ESLint (for JavaScript checking), Resharper, Visual Studio, NDepend, NCover
Problem Solving
  • Shorten Feedback loop: Idea -> Test -> Measure -> Learn -> (loop back)
  • Gall's Law: Solving complex problems (or systems) from scratch in one go does NOT work, but starting with simple case that works and iterating works (summarized from quote from John Gall).
Testing with Continuous Delivery
  • Testing Pyramid (GUI testing, Acceptance testing, unit testing)
  • Interestingly, only 20%-25% of the audience in the Testing in CD talk at AOTB used TDD. Maybe it isn't so surprising because people that already know it and use it may not attend a talk on it when there are new topics to be heard.
  • Interestingly, only 10%-15% of same group used BDD. 
  • Hypothesis Driven Development - Can be written as statements in the following format: We believe <this capability>, Will result in <this outcome>, We will have confidence to proceed when <we see measurable signal>.
Pen Testing (Penetration Testing) 
  • Expect 20-30 minutes scan time depending on application size
  • Run on UI and API.
  • Use BDD to describe security tests (see BDD Security) to have human readable tests for security.
  • Tools can find 70-80% of vulnerabilities, but the rest needs to be manually done.
Pen Testing Tools
  • Static code review using tools like Veracode.
  • BDD_Security - Define security tests using BDD style scenarios.
  • Mittn - define a hardening target using human-readable language
  • Arachni-Scanner - free with source and runs on Windows, Mac, and Linux
  • Gauntlt - may be useful as well to coordinate security tools.
  • ZAP - free security tools / scanner.
  • SSLyze - checks for mis-configurations affecting SSL Servers.
  • Nessus - a paid PCI vulnerability scanner.
Business Agility
  • Process: Test Hypothesis -> Quick delivery and release -> measurement -> repeat
  • Big Bang never works, or at least is very scary.
  • Use The Strangler Pattern to avoid big bang.
  • Postel's Law - Architect for testability, develop for evolve-ability.
  • Last Responsible Moment - wait as long as you can (so you have more information), but no longer.
  • Beware of the silver bullet
  • Continuous Delivery - automate as much as you can. Note, you can still have approvals in your process, but the action once approved is still automatic.
  • Conway's Law - organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations
  • Keep systems poised for change.
  • Don't think yes/no to ideas, but instead think about what the risks and benefits are. Then the other person gets to decide if it is worth it.
  • Do the simplest possible thing that might work because the complexity may never be needed.
Understanding the problem
Keeping up

Thursday, September 22, 2016

Open a Command Prompt (shell) on a Remote Windows computer

You can quickly open a Powershell prompt to a Windows computer. From there you can execute commands and command line applications. The script below allows you to do this easily without having to put your password in the script file. It has the added benefit that you can execute it again (in the same Powershell session) and it will not re-prompt you for your credentials; it will reuse them. This lets you easily change where you connect to by just changing the name of the computer in the script.

$username = 'domain\username'
$computerName = 'MyComputerHere'
if (!$cred) { $cred = Get-Credential -UserName $username -Message "Enter your Active Directory Credentials in format (domain\username)" }
Enter-PSSession -ComputerName $computerName -Credential $cred


Friday, August 26, 2016

ZAP (OWASP Zed Attack Project) Basics

ZAP Overview

OWASP Zed Attack Proxy Project (ZAP) is a popular Java-based, open source security tool. It is useful for performing penetration tests on your web site (or ones you have permission to test) for security vulnerabilities.

It works similarly to Fiddler, but has several tools to help you find vulnerabilities, not just the ability to hack at requests. It does this by having you change the proxy in your browser to point to ZAP and then pointing ZAP to your corporate proxy or the web site itself.

It is extensible via a plug-in architecture. There are lots of videos and tutorials on how to use ZAP. For more details there is an excellent ZAP Getting Started Guide that you can use to get everything installed and explains how to start using it.

Another great resource is: Getting Started with ZAP and the OWASP Top 10: Common Questions

Manual Tests

It is important to keep in mind that not all kinds of penetration / security checks can be done automatically, and ZAP does not cover those. It is probably worth reviewing the information on the OWASP .NET Project for .NET security specifics.


Installing ZAP

On the home page for ZAP there is a Download ZAP link, but you can also use this direct link to the download page.

Monday, August 22, 2016

OpenXava

OpenXava is a nice Java-based model-driven development framework: you only have to create the domain classes you want to model. You then decorate the properties to add additional details such as relationships, whether a field is required, views, etc. The UI and database are generated automatically for you. This could be a very nice tool for a quick POC or demo. It is open source and uses Eclipse as the IDE. It could be a good replacement for projects that used IronSpeed Designer and don't mind switching from C# to Java.

Wednesday, July 20, 2016

Customizing Code Coverage in VS2015

The code coverage in Visual Studio 2015 by default includes the test code itself. This is often not desired. Below are some links to pages that help with this.

Customizing Code Coverage Analysis
Using Code Coverage to Determine How Much Code is being Tested
Troubleshooting Code Coverage
Troubleshooting missing data in Code Coverage Results

My conclusion is that the default settings that come with Visual Studio 2015 are not sufficient because they include the test code in the test results. I found a .runsettings file to be a necessary change. When I did this, I was tempted to add the test assemblies to the list of modules to exclude, but found this actually stopped the tests from being reported on. Instead I found it better to use namespace exclusions using the Functions tags.

For example,

   <Functions>
     <Exclude>
       <!--Exclude (Tests from the results) any functions in namespaces that have Test in them-->
       <Function>.*Test.*</Function>
     </Exclude>
   </Functions>

I also found it useful to exclude tests (classes or methods) from the code coverage results that use particular attributes on them. For example,

 <Attributes>
   <Exclude>
     <!--Don't forget "Attribute" at the end of the name -->
     <Attribute>^Microsoft\.VisualStudio\.TestTools\.UnitTesting\.TestClassAttribute$</Attribute>
     <Attribute>^TechTalk\.SpecFlow\.GivenAttribute$</Attribute>
     <Attribute>^TechTalk\.SpecFlow\.WhenAttribute$</Attribute>
     <Attribute>^TechTalk\.SpecFlow\.ThenAttribute$</Attribute>
   </Exclude>
 </Attributes>


I did however add any assemblies that have their own unit tests and code coverage reports to the list of modules to exclude. That way the code coverage of these assemblies is not counted twice.

The rest of the .runsettings file can be the same as the sample file from MS.
Also, here is a reporting tool that helps show code coverage results in a more user-friendly manner.

Monday, June 20, 2016

Links for Powershell

Microsoft Team Foundation Server Client - NuGet package to integrate with TFS (version control, work item tracking, build, etc.) via REST APIs.

Get Started with the REST APIs - shows the url format, usage, etc for TFS REST APIs.

TFS API Part 33 - Get Build Definitions and Build Details - example of how to get Build definition details.

Creating a Build Definition using the TFS 2013 API - actually in C#, but should work for Powershell also.

Pester - PowerShell testing. Support in VS2015 now.

NuGet Links

NuGet Package Restore - tells how to configure NuGet, TFS, etc to support different NuGet restore models.

Migrating MSBuild-Integrated solutions to use Automatic Package Restore

New-NuGetPackage Powershell script - create and publish NuGet packages using a .nuspec or project file from Explorer or Powershell.

NuGet Package To Automatically Create Your NuGet Packages

TFS 2015 Build: NuGet restore from an internal repository

Friday, April 22, 2016

How to have TFS version control not ignore .DLL and .EXE in the packages directory.

If you are not using Visual Studio to restore your NuGet packages then you need to check them in to source control. In my case this is the version control in TFS. To solve the problem you just need to add a .tfignore file to the packages directory.

The easiest way to do this is to create a new .txt file in the packages directory and call it ".tfignore.". Notice it starts and ends with the period. The last period will be removed automatically and you will be left with a file called ".tfignore".

You can also use notepad.exe to create the file. The trick with this is to change the Save as type to All Files and just type the name ".tfignore".

Once you have a file created open it in notepad and add the following lines to it.

!*.exe
!*.dll

The ! tells source control NOT to ignore files with these extensions.


TFS 2015 Build Highlights

Goals of new system
  • Web based
  • Simple customization
  • Real-time output
  • Versioning of build definition
  • Build pools - share build agents across projects and collections
  • Cross-platform - even Mac and Linux
  • Full support for XAML-based builds
Build Overview
  • Templates
  • Web Applications
  • Unit Testing
  • Staging and drop locations
  • Azure Deployment
  • Powershell

Build Definitions
  • Web based
  • Several Templates 
  • No XAML templates, but still supports them
  • My Dev Machine produces the same outputs as TFS will
  • Task gallery
  • Auditing (changes are logged with notes if desired).
  • Web based diff tool.
Running a Build
  • Real-time log view
  • Project by project breakdown
  • Build Outputs
Build Customization
  • Settings
  • Variables
  • Triggers
  • Versioning of build
  • Draft (not published yet)
  • Templates - reuse
Hosted Agent
  • Visual Studio Online (not on premise)
  • One build at a time (No XAML Builds)
  • < 1 hr
  • < 10 GB storage
  • No admin rights
  • Can't log on
  • Run on Visual Studio Online, not your agent
  • No interactive mode

Configuration
  • Options
  • Multi-configuration
  • Staging & build drop
  • Templates
Deployment
  • Deployment templates
  • Azure
  • PowerShell

Friday, April 15, 2016

Deployment Pipeline

Below are highlights from the chapter titled What is a Deployment Pipeline in the book Continuous Delivery.

A deployment pipeline is an automated manifestation of your process for getting software from version control into the hands of your users. It does not mean that there is no human interaction with the system through the release process; instead it ensures that the error-prone and complex steps are automated, reliable, and repeatable.

Goal: A deployment pipeline should allow you to create, test, and deploy complex systems of higher quality and at a significantly lower cost and risk than we could otherwise have done.

Deploy to any environment (Testing, Staging, Production, etc) with a click of a button
Repeatable deployment (deploy a previous version easily)


Minimum Stages of pipeline

The commit stage
Asserts that the system works at the technical level. Meaning that it compiles, passes a suite of commit tests, runs code analysis, creates binaries, and prepares a test database for use by later stages. Commit tests are primarily unit tests, but should also include a small selection of other types of tests such as BDD or integration tests to give a higher level of confidence that the build is working properly. The stage should take less than five minutes and definitely less than 10 minutes. Its purpose is to give quick feedback to the developer that something is not working and needs to be fixed before moving on to the next task.

Automated acceptance test stages
Asserts that the system works at the functional and nonfunctional level, and that it behaviorally meets the needs of its users and the specifications of the customers.

Manual test stages
Asserts that the system is usable and fulfills its requirements, detects any defects not caught by automated tests, and verifies that it provides value to its users. Typically, this would include exploratory testing environments, integration environments, and UAT (user acceptance testing).

Release stage
Delivers the system to users, either as packaged software or by deploying it into a production or staging environment (a staging environment is a testing environment identical to the production environment)



The process starts with the developers committing changes into their version control system. At this point, the continuous integration management system responds to the commit by triggering a new instance of our pipeline. This first (commit) stage of the pipeline compiles the code, runs unit tests, performs code analysis, and creates installers. If the unit tests all pass and the code is up to scratch, we assemble the executable code into binaries and store them in an artifact repository.

An application is composed of three parts:

  • Binaries
  • Data
  • Configuration


Principles

  • Keep the deployment pipeline efficient, so the team gets feedback as soon as possible.
  • Build upon foundations known to be sound. 
  • Keep binary files independent from configuration information
  • Keep configuration information in one place


Best Practices

Only build your binaries once - the binaries are the .NET assemblies and should only be compiled once in the Commit stage. The later stages use these binaries. Building at each stage or even for testing, code analysis, etc. is considered an anti-pattern.

Deploy the same binaries to all environments. For example, the same binaries used in UAT should be used in production. This can be verified by comparing hashes of the binaries.
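As an illustration only (the paths below are hypothetical), a hash comparison in C# might look something like this:

using System;
using System.IO;
using System.Security.Cryptography;

class VerifyBinaries
{
    // Compute a SHA-256 hash of a file so two copies of a binary can be compared.
    static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(sha.ComputeHash(stream));
        }
    }

    static void Main()
    {
        // Hypothetical locations: the copy in the artifact repository and the deployed copy.
        string artifactCopy = @"\\buildserver\drops\1.2.3\MyApp.dll";
        string deployedCopy = @"C:\inetpub\MyApp\bin\MyApp.dll";

        bool same = HashFile(artifactCopy) == HashFile(deployedCopy);
        Console.WriteLine(same
            ? "Binaries match - the same build is deployed."
            : "Binaries differ - this deployment did not use the built artifacts!");
    }
}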

Store the binaries in a file system, not version control.

Configuration data is not included in the binaries and should be kept separately. Configuration data is data that differs between environments, such as IP addresses, URLs, database connection strings, external services, etc. It can also include data that changes the behavior of the application.

Use configuration files and store them in source control - Each environment should have its specific settings stored in a configuration file that is specific to that environment. The correct file can be determined by looking at the hostname of the local server (if there is one server) or, in multi-server environments, through an environment variable supplied to the deployment script. Alternatively the data could be stored in a database.
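A minimal sketch of that selection logic in C# (the environment variable name and file layout here are hypothetical):

using System;
using System.IO;

class ConfigSelector
{
    // Pick the environment-specific config file: use an environment variable supplied by
    // the deployment script if present, otherwise fall back to the local server's hostname.
    static string GetConfigPath(string configDirectory)
    {
        string environment = Environment.GetEnvironmentVariable("DEPLOY_ENVIRONMENT")
                             ?? Environment.MachineName;

        string path = Path.Combine(configDirectory, environment + ".config");
        if (!File.Exists(path))
            throw new FileNotFoundException("No configuration file found for " + environment, path);

        return path;
    }

    static void Main()
    {
        Console.WriteLine(GetConfigPath(@".\configs"));
    }
}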

Binaries must be deployable to every environment

Deploy the same way to every environment - this ensures that the deployment process is tested effectively.

Test configuration settings - check that external services are actually available when deploying. Ideally, if a service is not available, the installation should not proceed.

Smoke test - after an installation, run some very basic scenarios to make sure the external services are accessible. Check particularly sensitive URLs, hosts, etc., and make sure pages that depend on configured information come up.
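For example, a rough smoke-test sketch in C# (the URLs are placeholders) that fails with a non-zero exit code if a configuration-dependent page is unreachable:

using System;
using System.Net;

class SmokeTest
{
    static int Main()
    {
        // Placeholder URLs: pages that only work if the configured external services are reachable.
        string[] urls =
        {
            "http://myapp.example.com/health",
            "http://myapp.example.com/reports/recent"
        };

        foreach (var url in urls)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(url + " -> " + response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Smoke test FAILED for " + url + ": " + ex.Message);
                return 1; // fail the deployment
            }
        }
        return 0;
    }
}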

Locked down environments - Production should only be changed once proper change management has been used to approve the change. The same should be true for test environments, but the approval process is easier.

Each change should propagate through the pipeline instantly - the first (commit) stage should be triggered upon every check-in, and each stage should trigger the next one immediately upon successful completion of the previous one. The system needs to be smart enough to check for new changes before running tests. For example, if three people check in before an integration test run finishes, the latest changes should be bundled into what is tested in that integration test run. Manually triggered stages (later ones) wait for user interaction.

If any part of the pipeline fails, stop the line.

Tuesday, April 5, 2016

FeatureToggle Review

FeatureToggle
FeatureToggle is a popular feature toggle package that has a good architecture, regular updates, a training video, doesn't use magic strings, and is extensible. The documentation can be found here.

FeatureToggle Review
Multiple platforms: .NET Desktop/Server, Windows phone, Windows Store
No magic strings
No default fallback values - throws exception
Flexible provider model to allow for swapping out of parts.
Straight forward to use
Extensible via Providers and Custom Toggles.

Downside
I don't like that I have to specify the connection string that I want to use for each toggle. It would be nice if it could use a default connection string instead. It seems a bit clunky to specify a key for the feature toggle AND one to point to the database connection string. Perhaps that can be changed.

Support Configurations

  • Compiled
  • Local configuration (app.config / web.config or App.xaml)
  • Centralized SQL Server
Architecture
Built-in Toggle classes use providers to get configuration values out of specific sources such as a database, configuration file, etc.
Strongly typed objects => Compile time checks to make sure it is completely removed.

Built-in Toggles
  • AlwaysOnFeatureToggle
  • AlwaysOffFeatureToggle
  • SimpleFeatureToggle
  • EnableOnOrAfterDateFeatureToggle 
  • EnableOnOrBeforeDateFeatureToggle
  • EnableBetweenDatesFeatureToggle
  • EnableOnDaysOfWeekFeatureToggle
  • RandomFeatureToggle - could use for random a/b testing
  • SqlFeatureToggle - toggle from value in SQL Server database

Installation
Nuget: FeatureToggle
NOTE: FeatureToggle.Core is installed when FeatureToggle is installed.

Overview of Usage
For each feature that requires a feature toggle, a new class that inherits from one of the built-in Toggle classes is required.


Compiled Toggle Usage

public class MyFeature : AlwaysOnFeatureToggle {}

In the place where you want to use it, create a new instance of the MyFeature class.

i.e.

public MyFeature MyFeature1 = new MyFeature();

Can add to ViewModel to use on Razor page.

i.e. 

@if (Model.MyFeature1.FeatureEnabled)
{
....html here
}


Config File Usage

In this example it is the same as above, except we change the class we inherit from so that we get the value from the config file. Note, the ViewModel and Razor page did not need to change even though we changed the logic that hides the feature.

i.e. 
public class MyFeature : SimpleFeatureToggle{}

We do need to add it to the configuration file. We just need to add a new key to the appSettings. It must follow the convention that the key starts with "FeatureToggle." followed by the name of our class (MyFeature in this case).

i.e.
<appSettings>
<add key="FeatureToggle.MyFeature" value="true"/>
</appSettings>

SqlFeatureToggle Usage
We will continue our example as before, except we need to also tell the FeatureToggle where the database is.

In a database create a table called Toggles
ToggleName nvarchar(100) not null (primary key)
Value bit not null
NOTE: The table and column names or types are not important since we will write the SQL to access it later.

We need to add the connection string to the list of connection strings in the web.config.

<connectionStrings>
<add name="MyDB" connectionString="typical connection string here" />
</connectionStrings>

Insert a record with the ToggleName being the name of our class (MyFeature in this case) and value = True.

We then need to tell FeatureToggle which connection string to use and what SQL to run. We put these in the web.config as appSettings key-values:

<appSettings>
<add key="FeatureToggle.MyFeature.ConnectionStringName" value="MyDB" />
<add key="FeatureToggle.MyFeature.SqlStatement" value="select value from Toggles where ToggleName = 'MyFeature'" />
</appSettings>
Removing a Toggle
Delete the class. Rebuild project. Review each compiler error. Remove from web.config. 

Creating a Custom Toggle
Custom Toggles are good when the value is based on some business logic.

Continuing on the examples...

public class MyBusinessLogicToggle : IFeatureToggle 
{
public bool FeatureEnabled { get { return businessLogicHere;}}
}

Change our Feature Toggle so that it inherits from the custom feature toggle. This custom toggle can be reused with multiple feature toggles.

public class MyFeature : MyBusinessLogicToggle {}

Creating a Custom Provider
The base class for Feature Toggles has a property called ToggleValueProvider. Setting this value to a custom Provider allows us to change the default provider to a custom one.

We can create a custom provider by creating a class that implements the provider interface.

i.e. 
public class MyProvider : IBooleanToggleValueProvider
{
    public bool EvaluateBooleanToggleValue(IFeatureToggle toggle)
    {
        return logicHere;
    }
}
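To use it, set the custom provider on the toggle instance via the ToggleValueProvider property mentioned above (a minimal sketch; where you wire this up depends on how your toggles are created):

var feature = new MyFeature();
feature.ToggleValueProvider = new MyProvider(); // swap in the custom provider

if (feature.FeatureEnabled)
{
    // feature-specific code here
}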

Some Alternatives


References
Most of this data is sourced from the Implementing Feature Toggles in .NET with FeatureToggle course on Pluralsight.


Monday, April 4, 2016

Feature Toggles

Release Feature Toggles
For the development team to manage the release process instead of using source control branches. These should be short lived and removed when no longer needed to keep technical debt low.

Business Feature Toggles
For the benefit of the business, so that they can tailor the user experience based on business requirements. Consider a regular edition vs a pro edition of software. Typically long lived or permanent.

Configuration Types

Compiled Configuration
Toggle value baked-in to assembly.
Features can only be enabled or disabled with a new release
Multiple machines may need to be kept in sync

Local Configuration
Toggle values held in app/web.config
Can be changed without needing a new release
Multiple machines may need to be kept in sync

Centralized Configuration
Toggle values held in central location, database, network share, UI, etc.
Can be changed without needing a new release
Whole system can be managed centrally
The business can potentially manage toggles via a UI.

Toggle decision points
User interface element toggling
Toggle links to new page / screen
Number of decision points

Defaults
Don't set defaults for toggles. An exception should be thrown if the state of a toggle is unknown.
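A hand-rolled sketch of that rule in C# (the key naming here is just for illustration; libraries like FeatureToggle already behave this way):

using System;
using System.Configuration;

public static class ToggleReader
{
    public static bool IsEnabled(string toggleName)
    {
        // No silent default: if the toggle's state can't be determined, throw.
        string raw = ConfigurationManager.AppSettings["FeatureToggle." + toggleName];
        bool value;
        if (raw == null || !bool.TryParse(raw, out value))
        {
            throw new InvalidOperationException(
                "The state of toggle '" + toggleName + "' is unknown - refusing to assume a default.");
        }
        return value;
    }
}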

Alternatives to release toggles
Deliver smaller parts of features (smaller releases, more often)
Build the backend first and the user interface last. This delays testing of the UI until the end of the cycle.

Suggested Open Source Solutions
FeatureToggle - Popular choice, actively developed, nice choice.
FeatureSwitch
FeatureSwitcher






Friday, April 1, 2016

Branching Strategy for an Agile team desiring continuous integration that must work with the restriction of a slow release cycle

Solution Requirements:

  1. Continuous Integration - Frequent integration of code changes to limit big scary merges. Avoid long periods without merging back for others to see.
  2. Changes to UAT - Allows for a release to stabilize at a slower pace than development.
  3. What is approved in UAT is what gets released to production.
  4. Hot fixes to Production - Allow for emergency changes to a production release
  5. Parallel UAT and Development - Development should be considered working, but not ready for production. UAT would be a candidate for release to production pending user approval.
  6. Code Reviews - each feature should be code reviewed. Ideally any merging visible to everyone is done at the feature level, but the developer making the changes can commit more often. Thus once the feature is done, it can be bundled up, code reviewed, merged, and ultimately released.
Goal: 
Deliver software with a high level of confidence. Confidence to be derived from unit tests, integration tests, BDD tests (End to End tests), and UAT user approval. This implies that what was approved in UAT is what should be moved to production.

A Compromise Solution:
The solution we were tempted to implement is best described in the post A successful Git branching model by Vincent Driessen, but we are using TFS, not Git. It basically says each feature is done in a branch, though I would add that changes should be pulled from the develop branch DAILY. Ideally, we should be able to look at the develop branch and see feature branches being merged in, not individual commits. The individual commits would have been done on the feature branch itself. This makes it easy to see what is in the develop branch, and when features are merged to the release branch and the main branch it is again at the feature level.

Once a feature passes all regression tests (unit tests, integration tests, BDD tests) and has been code reviewed it can be merged into the develop branch. Ideally a feature branch would live only a few days. If it lives longer than a few days, the feature should be merged into the develop branch anyway to avoid a big scary merge. If a feature is not finished by the end of a sprint and part of it has been merged to the develop branch while part is unfinished, a feature toggle (see the Feature Toggles note below on how these can be implemented) should be added so that it is not accessible to the user.

A feature should be merged into the release branch by the developer responsible for it once they are confident (it has passed unit tests, integration tests, BDD tests, and exploratory testing) that it is ready for release (pending UAT approval). Unless a specific need comes up, we will only have one release branch and label each release in it when it is merged into master (main). A separate versioned release branch can be created if needed though.

If a change (bug fix, enhancement, etc.) is needed on the release branch because of something that needs to be changed in UAT, the change should be made on the release branch and then merged to the develop branch.

Similarly, if an emergency change (hotfix) is needed on the production code, the change should be made on the master (main) branch. It should then be merged into the develop branch and eventually to the release branch (as part of the usual merging into release branch process).

A label should be created automatically or explicitly when merging from development to release and then to master (main).

In the end this may not work so well when trying to integrate continuously. It can work if the branches are short lived enough. The real issue is the unknown time it takes to integrate when the effort for the integration of branches is large, and the unknown time needed to stabilize after the integration and before release. This tends to be needed at least once before a release. Smaller, more frequent integrations have proven to require a more predictable amount of effort, and feature toggles (feature hiding) tend to keep the code base stable.

Refactoring is an exception to this strategy: those changes should be merged immediately to minimize difficult merges. Unfortunately, this makes this strategy less useful.

NOTE: This strategy could probably work by merging branches (both to and from the develop branch) at least once a day. The question then is what the benefit of a branch is. Branching is for isolation of development, and successful continuous integration demands not being isolated. A compromise may work as well, such that branching off the develop branch is removed and instead everyone develops on the develop branch. A branch should not live more than 3 days to be considered short lived.


A Better Approach: 
Have one mainline (no other branches unless doing a bugfix for UAT or production). The key is that the mainline is ALWAYS deployable. Use feature toggles for every feature and enable or disable them as needed so that what is enabled is deployable. Remove old feature toggles once they are no longer needed. Use labels for each release or deployment. This has the advantage of always integrating; only create a branch if needed, for example for an emergency hotfix. Alternatively, the mainline could be released for the hotfix if all previously unreleased feature toggles are still turned off, but that is a judgement call based on the situation. Note, the release label would be on the new branch in this case, and the branch would then be merged back into the mainline immediately. It is simple to manage and could work very well for open source or fast moving projects that can release as desired. It may be a bit difficult to use where there is a lengthy UAT approval cycle by end users. This difficulty can be mitigated by making the change on the mainline, disabling the appropriate feature toggles, and putting it in UAT again. It could work well for a continuous deployment model too, I would think. This entire model (as does any CI/CD model) relies on having automated regression tests.

Some guidelines:

  • Avoid branches - Branches should only rarely be used. For example, only use branches for things like releases or spikes - in general, things that will not be used again.
  • No long-lived branches - Long-lived branches are opposed to successful continuous integration and should be avoided at all costs. This includes things like refactoring, long term development, etc. Instead use feature toggles (feature hiding).
  • Integrate Daily - Get the latest from the mainline and commit to the mainline at least once a day
  • Pass tests - all tests should still work BEFORE (and after) checking in your changes. This helps keep the mainline deployable.
  • Incremental changes - It may take a little longer to do a bigger change in smaller incremental changes, but the effort and time spent is worth the effort because the mainline is always deployable.
Optional:
  • Label or Branch for releases - A branch or label can be created just before a release. Once created testing and validation of the release is done from this branch. New development is performed on the mainline. i.e. Only critical bug fixes are done on the release branch and immediately merged back to mainline. Branches are always off the mainline, not existing release branches.
References:
A successful  Git branching model - what we are implementing here
Feature Toggles - enable / disable features using compile or runtime flags to make a feature visible to the end user. Pete Hodgson's perspective on how to implement / issues is here. Martin Fowler says this.
Branching and Merging: Ten Pretty-Good Practices
Continuous Delivery by Jez Humble and David Farley. See chapters on Chapter 13: Managing Components and Dependencies, and Chapter 14: Advanced Version Control.
Some different version control implementations - worth looking at since they are a bit simpler than the successful Git branching model.

Wednesday, March 23, 2016

Calling Powershell script in TFS 2013 Build Definition

Overview
The default Build Definition template in TFS 2013 has added the ability to execute Powershell scripts before and after a build (Pre-build and Post-build). This is different from the pre/post build scripts in Visual Studio. This is much easier than customizing the build template as was required in earlier versions of TFS. TFS 2015 is different again, but I believe it will still use Powershell scripts for extending its functionality.

Your Powershell Script
You will need to put the Powershell script in a file (a good thing) and reference it in the Pre-build script path or Post-build script path properties of the Process | Build | Advanced screen. There are corresponding Pre-build script arguments and Post-build script arguments properties as well that allow parameters to be passed to the Powershell scripts.

To avoid additional issues or configuration you may want to keep the file in TFS Source Control alongside your code. If you don't want or need it to be under source control you can put it directly on the TFS Controller server and reference that path. This can be notably easier for experimenting, but you'll lose the benefits of source control unless you manually keep changes in source control.

Logging 
Output from the following cmdlets shows up under the Diagnostics tab (TFS web | your TFS project | Build tab | your build definition | Diagnostics tab):
Write-Host
Write-Output
Write-Warning
Write-Verbose

NOTE: If you want the Write-Verbose calls to show up you need to add the [CmdletBinding()] attribute to the beginning of your .ps1 file AND add -verbose to the Pre-build script arguments property in the build definition.

Environment Variables

  • $env:TF_BUILD_BUILDDEFINITIONNAME
  • $env:TF_BUILD_BUILDDIRECTORY
  • $env:TF_BUILD_BUILDNUMBER
  • $env:TF_BUILD_BUILDREASON
  • $env:TF_BUILD_BUILDURI
  • $env:TF_BUILD_DROPLOCATION
  • $env:TF_BUILD_SOURCEGETVERSION
  • $env:TF_BUILD_SOURCESDIRECTORY
  • $env:TF_BUILD_TESTRESULTSDIRECTORY
You can also put dir env: anywhere in the Powershell script to list all the environment variables.

NOTE: Much of this information is actually gathered from here.
Good information specific to TFS2013 can be found here.

Friday, March 11, 2016

Search and Replace in a MS Word document

I reviewed options for editing MS Word files in DOCX format and decided that the free option with the greatest chance of still existing in 10 years is the Open XML Word Processing SDK (see my review for details).

For complex requirements maybe a different choice would make sense. However, my requirements are simple, much like a mail merge:

  1. Taking an existing MS Word file (DOCX format) as input. (Use it as a template)
  2. Search the MS Word file for some placeholders / tags and replace with real data, but don't save any changes to the original file since it is my template.
  3. Be able to write changes to a new file or stream file to browser for download
As it turns out this can be done in very few lines of code and for FREE. Below is my solution.

// Sample command line application
// Requires: using System.Collections.Generic; using System.IO;
//           using System.Text.RegularExpressions; using DocumentFormat.OpenXml.Packaging;
static void Main(string[] args)
{
    string filename = "Test.docx";
    var oldNewValues = new Dictionary<string, string>();
    oldNewValues.Add("pear", "banana");
    oldNewValues.Add("love", "like");
    byte[] returnedBytes = SearchAndReplace(filename, oldNewValues);
    File.WriteAllBytes("Changed" + filename, returnedBytes);

}


// Does a search and replace in the content (body) of a MS Word DOCX file using only the DocumentFormat.OpenXml.Packaging namespace.
// Reference: http://justgeeks.blogspot.co.uk/2016/03/how-to-do-search-and-replace-in.html
public static byte[] SearchAndReplace(string filename, Dictionary<string, string> oldNewValues)
{
    // make a copy of the Word document and put it in memory.
    // The code below operates on this in memory copy, not the file itself.
    // When the OpenXml SDK Auto saves the changes (that is why we don't call save explicitly)
    // the in memory copy is updated, not the original file.
    byte[] byteArray = File.ReadAllBytes(filename);
    using (MemoryStream copyOfWordFile = new MemoryStream())
    {
        copyOfWordFile.Write(byteArray, 0, (int)byteArray.Length);

        // Open the Word document for editing
        using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(copyOfWordFile, true))
        {

            // Get the Main Document Part. It is really just XML.
            // NOTE: There are other parts in the Word document that we are not changing
            string bodyAsXml = null;
            using (StreamReader sr = new StreamReader(wordDoc.MainDocumentPart.GetStream()))
            {
                bodyAsXml = sr.ReadToEnd();
            }

            foreach (var keyValue in oldNewValues)
            {
                string oldValueRegex = keyValue.Key;
                string newValue = keyValue.Value;

                // Do the search and replace. Here we are implementing the logic using REGEX replace.
                Regex regexText = new Regex(oldValueRegex);
                bodyAsXml = regexText.Replace(bodyAsXml, newValue);
            }

            // After making the changes to the string we need to write the updated XML string back to
            // the Word doc (remember it is in memory, not the original file itself)
            using (StreamWriter sw = new StreamWriter(wordDoc.MainDocumentPart.GetStream(FileMode.Create)))
            {
                sw.Write(bodyAsXml);
            }
        }

        // Convert the in memory stream to a byte array (binary data)
        // NOTE: These bytes can be written to a physical file or streamed back to the browser for download, etc.
        byte[] bytesOfEntireWordFile = copyOfWordFile.ToArray();

        return bytesOfEntireWordFile;
    }
}



After calling the SearchAndReplace method you have the bytes that make up the MS Word file. It is up to you what you want to do with them. You can save them to a file or stream them to the browser when a user clicks a link to download a file.

To write the file to another file (leaving the original unchanged), use the following line:

File.WriteAllBytes(newFilename, returnedBytes);


To stream the bytes back to a browser via an Action method in an ASP.NET MVC controller, use the following:

return File(returnedBytes, "application/msword", "Filename to download as here.docx");


NOTE: Original code inspired by this MSDN example and additionally a post or other page I can't remember (sorry).

Friday, March 4, 2016

Modelling project gives warnings when Unity is used.

The issue as reported here and copied / summarized below:

I am using Visual Studio Enterprise 2015 and tried to create a layer diagram in order to generate and validate dependencies. But this fails because VS throws warnings while building the modeling project:
CurrentVersion.targets(1819,5): warning MSB3268: The primary reference "...\ClassLibrary4\bin\Debug\ClassLibrary4.dll" could not be resolved because it has an indirect dependency on the framework assembly "System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" which could not be resolved in the currently targeted framework. ".NETFramework,Version=v4.0". To resolve this problem, either remove the reference "...\ClassLibrary4\bin\Debug\ClassLibrary4.dll" or retarget your application to a framework version which contains "System.Runtime, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a".
I figured out that if you remove Unity the warnings are gone and dependencies are shown as expected.
What is the reason for this behavior and is there any workaround?
I tried the Unity prerelease package and also other target frameworks. No effect at all. The issue is reproducible with a new project after adding a modelling project and using Unity in one of the referenced projects.

The solution that worked for me as well:

The problem was that VS2015 was compiling the modelling project using the wrong target framework (4.0):
Task Parameter:TargetFrameworkDirectories=C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.0
There is no TargetFrameworkVersion in the project file of the modelling project (*.modelproj). But after adding it to the first property group the project compiles and validates as expected, without any warnings.
Solution:
  1. Unload the modelling project
  2. Right Click -> Open *.modelproj
  3. Add the following line in between the first PropertyGroup open / close tags.
<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
NOTE: Replace v4.5 with your target framework (the target framework for your application)

After you're done, your project file will start with something like this:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{f60f6f86-e1d5-4d33-b8ed-0bf2172780cf}</ProjectGuid>
    <ArchitectureToolsVersion>2.0.0.0</ArchitectureToolsVersion>
    <Name>ModelingProject1</Name>
    <RootNamespace>ModelingProject1</RootNamespace>
    <SccProjectName>SAK</SccProjectName>
    <SccProvider>SAK</SccProvider>
    <SccAuxPath>SAK</SccAuxPath>
    <SccLocalPath>SAK</SccLocalPath>
    <ValidateArchitecture>true</ValidateArchitecture>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  </PropertyGroup>
...




Is a Repository really a good design pattern?

Here are some interesting articles on the subject.

Is the Repository pattern useful with Entity Framework - lists several posts on why the repository pattern is not really that useful and doesn't add much value.

Favor query objects over repositories - I really like this idea. It adheres to SRP and DRY, and is testable (though if you mock Entity Framework it is still not easy).

Thursday, January 7, 2016

Lazy Loading using Castle Windsor

If you are not using Castle Windsor and you want to do lazy loading of myRepository, you could write some code like the following.

      
public class MyClass
{
        private IRepository myRepository;
       
        public IRepository MyRepository
        {
            get
            {

                if (myRepository == null)
                {
                    myRepository = new Repository();
                }
                return myRepository;
            }
        }

}

This is quite repetitive code and hard-codes the dependency on and creation of the Repository object. The Factory pattern could be used to abstract its creation and remove this dependency, but again, writing a proper Factory without Castle Windsor is a tedious task. Check out the previous link for details on this.

With Castle Windsor this is trivial.

public Lazy<IRepository> MyRepository { get; set; }

You will need to turn on lazy loading. This is done in your Installer class.

    public class MyInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(

                // Registering the lazy component loader is what turns on Lazy<T> support
                Component.For<ILazyComponentLoader>()
                         .ImplementedBy<LazyOfTComponentLoader>(),

                Component.For<IRepository>()
                         .ImplementedBy<Repository>().LifestyleTransient()
                );
           
        }
    }


There is a slight difference in how you use the Property now though.

Instead of something like this, as you would do in our first example:

MyClass obj = new MyClass();
obj.MyRepository.SomeMethod();

Using the Castle Windsor code we need to call .Value to trigger the creation and dependency injection (lazy loading).

MyClass obj = new MyClass();
obj.MyRepository.Value.SomeMethod();

The best part about this solution is that it required very little work to implement, and IRepository and Repository are mapped in the Installer where they should be, not in the classes themselves. We also aren't calling Resolve or interacting with the IoC container anywhere except as recommended (see Three Calls Pattern).

Some technical ideas are based on discussions here.

Wednesday, January 6, 2016

Send CONTROL-ALT-DELETE to Remote Desktop to allow changing password

As it turns out, CONTROL-ALT-DELETE can't be sent to a Remote Desktop session. However, CONTROL-ALT-END can, and this brings up the same screen on a Remote Desktop and allows you to change your password.

If you are using Citrix instead of MS Remote Desktop you can try Control-F1.

If you are using Mac OSX, you can try FN-CONTROL-ALT-DEL.

If all else fails, you can bring up the Microsoft on screen keyboard and click the CONTROL-ALT-DELETE keys.

I got these tricks from here.