Angular2 (Typescript) + Webpack + Coverage using remap-istanbul results in Error: Could not find source map

I am currently working on different projects using Angular 2 with Webpack. For these projects we use remap-istanbul to generate coverage reports.

Last week I was upgrading the solution from Angular 2.0 to Angular 2.4, and everything seemed to work as expected except the coverage. During unit testing this error occurred:

Error: Could not find source map for: "C:\Sources\NG2\src\app\app.component.ts"
    at CoverageTransformer.addFileCoverage (C:\Sources\NG2\node_modules\remap-istanbul\lib\CoverageTransformer.js:148:17)
    at C:\Sources\NG2\node_modules\remap-istanbul\lib\CoverageTransformer.js:268:14
    at Array.forEach (native)
    at CoverageTransformer.addCoverage (C:\Sources\NG2\node_modules\remap-istanbul\lib\CoverageTransformer.js:266:24)
....
....
....

Some files seemed to work and others did not. Really strange...

After reading some discussions on the internet I found one solution: downgrade "istanbul-instrumenter-loader" to version 0.2.0. And voilà, it worked... So if you like to stop at the first fix you find on the internet: STOP READING NOW. Otherwise, continue for the even simpler solution.

Ok, so I am not the kind of person to stop at the first fix I find, and since you are still reading, you are also willing to find the best solution instead of the first one. So here is the 'real' solution:

In the webpack configuration for the unit tests, check whether you have a TypeScript configuration like this:

ts: {
        compilerOptions: {
            sourceMap: false,
            sourceRoot: './src',
            inlineSourceMap: true
        }
    }

If you have a configuration like this, remove both lines about source maps. The configuration will then look like this:

ts: {
        compilerOptions: {
            sourceRoot: './src'
        }
    }

Now you can use the newest version of the "istanbul-instrumenter-loader" and it works like a charm.
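
For reference, a minimal sketch of what the corresponding loader rule might look like in a webpack test configuration (the file name, test pattern and excludes are assumptions; adjust them to your setup):

// webpack.test.js (sketch, webpack 2 style)
module.exports = {
  module: {
    rules: [
      // instrument compiled TypeScript sources, but not the specs themselves
      {
        test: /\.ts$/,
        loader: 'istanbul-instrumenter-loader',
        enforce: 'post',
        exclude: [/\.spec\.ts$/, /node_modules/]
      }
    ]
  }
};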

So have fun with your unit tests and keep up with the coverage ;-).

Start TFS vNEXT (2015) build for a specific commit using GIT

Today I was trying to start a TFS build for a specific commit id using the "Queue new build" window of TFS vNext.

But when trying to do so, TFS reported the following problem: "The value specified for SourceVersion is not a valid commit id".

After a lot of searching on the internet I found that TFS vNext uses the full SHA-1 version of the commit id. To get the SHA-1 commit id you can execute the following command:

git rev-parse HEAD

This command returns the latest commit id on HEAD in the full SHA-1 format:

c7ab3fd13e30b55394afc3ebb5c73240c662b160

Using this SHA-1 version of the commit id in the TFS window will start the build for the specified commit. If you want to start a build for an older commit, perform the following steps:

  1. Find the commit id in TFS (ex: a9d8f7e7)
  2. Open the command prompt and type 
    git rev-parse a9d8f7e7
  3. The output of the command should be something like: a9d8f7e76d24d5b4f1cc15378eb9513a6e2cdb4d
  4. Copy the SHA-1 commit id
  5. Queue a new build in TFS
  6. Paste the output of the git command in the "Commit" textbox

Hopefully Microsoft will add support for the short commit ids in the future, but for now this is the workaround.

Update (Thanks to Travis and Artour)

There is a way to get the commit id using Visual Studio. If you open the History of a branch and open the Commit Details of a commit, you can press the 'Actions' link and choose 'Copy Commit ID'.

If you are using SourceTree you can see the full commit id directly.

Happy coding ;-)

Update local develop branch without checkout

Early this year I wrote a post about Git commands that I think are handy to know: Git commands I keep forgetting.

After working with Git more and more I have gathered some other commands as well that I would like to share with you:

Update the local develop branch while working on a feature branch

We use git flow from nvie a lot, which is very nice because it handles a lot of stuff for you. But one thing that I did not like: when I wanted to finish a feature using: 

$ git flow feature finish FEATURE-NAME

I first had to update the develop branch, because otherwise it would merge with an old version of develop. The steps that I had to follow were:

# switch to develop
$ git checkout develop

# update develop
$ git pull

# switch to feature branch
$ git checkout feature/FEATURE-NAME

# merge from develop to feature branch
$ git merge develop

# finish feature
$ git flow feature finish FEATURE-NAME

# push changes
$ git push

It is also possible to merge from origin/develop to your feature branch, but still if you want to finish the feature branch you have to make sure that the local develop branch is up to date.

With the following command you can eliminate the first 3 commands:

# update develop branch without checkout
$ git fetch origin develop:develop

So if the feature branch is not up to date with develop the flow becomes the following:

# update develop
$ git fetch origin develop:develop

# merge from develop to feature branch
$ git merge develop

# finish feature
$ git flow feature finish FEATURE-NAME

# push changes
$ git push
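
If you use this flow a lot, you can even wrap the update-and-merge steps in a Git alias (the alias name 'sync-develop' is my own invention):

# define the alias once; an alias starting with '!' runs as a shell command
$ git config --global alias.sync-develop '!git fetch origin develop:develop && git merge develop'

# from your feature branch, update develop and merge it in one go
$ git sync-develop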


Mock your Typescript Classes and Interfaces (the easy way) [Angular 2.0.0 Final]

Updated to Angular 2.0.0 Final Release (using TestBed instead of addProviders) and with Spy functionality

Code: https://github.com/jjherscheid/ts-mocks
Npm: https://www.npmjs.com/package/ts-mocks

If you are familiar with Unit Testing in Typescript / Angular2, you may have created a lot of Mock objects for your unit tests like:

// Mocks the CookieService from angular2-cookie
class MockCookieService {
  public get(key: string): string { return null; }
  public put(key: string, value: string) { }
}

Such a mock object does not have to implement all the methods, and it can be used by injecting or providing the class like so:

let cookieService: CookieService;

// Add the providers
beforeEach(() => {
  TestBed.configureTestingModule({
    ...,
    providers: [{ provide: CookieService, useClass: MockCookieService }]
  });
});

// Inject values
beforeEach(inject([..., CookieService], (..., _cookieService: CookieService) => {
  ...
  cookieService = _cookieService;
}));

What I don't like about this approach is the way the mock is created. It should have the same methods, but nothing guarantees that its interface is the same as that of the original object you want to mock.

To overcome this problem I wanted to create some Mock<T> object that can be used to create your Mock object with the intellisense you want.

The idea is to be able to specify the object like so:

// Mock for the CookieService from angular2-cookie
mockCookieService = new Mock<CookieService>();
mockCookieService.setup(ls => ls.get);
mockCookieService.setup(ls => ls.put);

This makes it possible to set up values for properties and methods. The Mock class can then be used in the beforeEach in the following way:

// Create a variable for the Mock<T> class
let mockCookieService: Mock<CookieService>;

// NOTE: change useClass to useValue and use mockCookieService.Object

beforeEach(() => {
  // Create new version every test
  mockCookieService = new Mock<CookieService>();

  // Setup defaults
  mockCookieService.setup(ls => ls.get);
  mockCookieService.setup(ls => ls.put); 

  TestBed.configureTestingModule({
    ...
    providers: [{ provide: CookieService, useValue: mockCookieService.Object }]
  });
});

In your unit tests it is now possible to use the mockCookieService:

it('using with default setup from beforeEach', () => {
  let r = sut.getValue('Test');
  expect(r).toEqual(null);
});

it('setup different value in test', () => {
  mockCookieService.setup(ls => ls.get).is(key => key + '-mock');

  let r = sut.getValue('Test');
  expect(r).toEqual('Test-mock');
  // integrated spy
  expect(mockCookieService.Object.get).toHaveBeenCalled();
});

I created a small library that makes it possible to use mock objects like the above. The code can be found at https://github.com/jjherscheid/ts-mocks and it is also available as an npm package at https://www.npmjs.com/package/ts-mocks.

Or use the npm install:

npm install ts-mocks


If you have any ideas, feel free to fork the Git repo and make suggestions! I hope you enjoy the solution.

Webpack: There is another module with an equal name when case is ignored

While working on my project I got the following warning from Webpack:

WARNING: There is another module with an equal name when case is ignored.

After some googling I found a solution: check the casing of all the 'require' paths used in your project. Well, that did not work, because all the casings were the same. And then... suddenly I found the issue by comparing the two console windows I was using: one gave the warning, the other did not.

Did you see the problem? Well... it is the casing of the drive letter. The first window had a lowercase c: and the other an uppercase C:.

This problem occurred because I opened the first window from Total Commander and the other from the Start Menu.
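
If you want to verify this yourself, print the working directory from both console windows (assuming Node.js is on your PATH) and compare the drive letters:

node -e "console.log(process.cwd())"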

Using the Specification Pattern in Typescript

In some of my C# projects I used the Specification pattern for handling business rules. An example can be found at https://en.wikipedia.org/wiki/Specification_pattern. I was wondering if we could create the same experience in TypeScript as well, so let's give it a try. I used the example from the wiki page and converted it to TypeScript.

Specify the interface

Before creating specifications we need the ISpecification interface

export interface ISpecification<T> {
    IsSatisfiedBy(candidate: T): boolean;
}

I don't like the way the wiki creates an ISpecification<T> with the And/Or/Not operators, so I created a second interface which contains the operators for composite specifications:

export interface ICompositeSpecification<T> extends ISpecification<T>{
    and(other: ICompositeSpecification<T>): ICompositeSpecification<T>;
    or(other: ICompositeSpecification<T>): ICompositeSpecification<T>;
    not(): ICompositeSpecification<T>;
}

Creating Base classes

First we will create a base class for the composite specification.

export abstract class CompositeSpecification<T> implements ICompositeSpecification<T>
{
    abstract IsSatisfiedBy(candidate: T): boolean;

    and(other: ICompositeSpecification<T>) : ICompositeSpecification<T> {
        return new AndSpecification<T>(this, other);
    }

    or(other: ICompositeSpecification<T>) : ICompositeSpecification<T> {
        return new OrSpecification<T>(this, other);
    }  

    not() : ICompositeSpecification<T>{
        return new NotSpecification<T>(this);
    }
}

As you can see, it is also possible in TypeScript to use abstract classes with abstract methods! The class must also be exported so it can be used as a base class for other specifications in your project. Now we only need to create the And, Or and Not specifications as described on the wiki page. First the AndSpecification:

class AndSpecification<T> extends CompositeSpecification<T>{
    constructor(
        public left:ICompositeSpecification<T>,
        public right:ICompositeSpecification<T>){
        super();
    }

    IsSatisfiedBy(candidate: T) : boolean{
        return this.left.IsSatisfiedBy(candidate) 
           && this.right.IsSatisfiedBy(candidate);
    }
}

As you can see, there is not much code needed for creating this class. The 'left' and 'right' members are declared as constructor parameters, so I can skip the usual assignment code like:

this.left = left;
this.right = right;

With this AndSpecification in place, the other specifications are more or less the same except for the IsSatisfiedBy method:

class OrSpecification<T> extends CompositeSpecification<T>{
    constructor(
        public left:ICompositeSpecification<T>,
        public right:ICompositeSpecification<T>){
        super();
    }

    IsSatisfiedBy(candidate: T) : boolean{
        return this.left.IsSatisfiedBy(candidate) 
           || this.right.IsSatisfiedBy(candidate);
    }
}

class NotSpecification<T> extends CompositeSpecification<T>{
    constructor(
        public spec:ICompositeSpecification<T>){
        super();
    }

    IsSatisfiedBy(candidate: T) : boolean{
        return !this.spec.IsSatisfiedBy(candidate);
    }
}

Example of usage

As an example I created a Person class with name, age and gender properties:

enum Gender {
  Male,
  Female
}

class Person {
  constructor(
    public name: string,
    public age: number,
    public gender: Gender) {
  }
}

Then create a list of Persons and use it inside the app component:

import { Component, OnInit } from '@angular/core';
import { CompositeSpecification, ISpecification} from './specifications';

@Component({
  selector: 'my-app',
  templateUrl: 'app/app.component.html'
})
export class AppComponent implements OnInit {
  persons: Person[] = [];

  ngOnInit() {
    this.persons.push(new Person('Mike', 34, Gender.Male));
    this.persons.push(new Person('Perry', 8, Gender.Male));
    this.persons.push(new Person('Gregory', 6, Gender.Male));
    this.persons.push(new Person('Rachel', 3, Gender.Female));
    this.persons.push(new Person('Betty', 35, Gender.Female));
  }
}

I created this example with Angular 2, but this will work in any TypeScript application.

Let's say you want to filter the list of persons with the following specifications:

  1. All female persons
  2. All mature female persons
  3. All mature or female persons
  4. All immature or female persons

For this we need the following specification classes:

export class IsMatureSpecification extends CompositeSpecification<Person>{
  IsSatisfiedBy(candidate: Person): boolean {
    return candidate.age > 18;
  }
}

export class GenderSpecification extends CompositeSpecification<Person>{
  constructor(private gender: Gender){ super(); }

  IsSatisfiedBy(candidate: Person): boolean {
    return candidate.gender == this.gender;
  }
}
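
To see that the composition works outside Angular as well, here is a quick standalone check using the classes defined above:

// combine the two specifications and test them against sample persons
let matureFemaleSpec = new GenderSpecification(Gender.Female).and(new IsMatureSpecification());

console.log(matureFemaleSpec.IsSatisfiedBy(new Person('Betty', 35, Gender.Female)));  // true
console.log(matureFemaleSpec.IsSatisfiedBy(new Person('Rachel', 3, Gender.Female))); // false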

We can add these specifications to the AppComponent class

export class AppComponent implements OnInit {
    persons: Person[] = [];

    private femaleSpec = new GenderSpecification(Gender.Female);
    private matureSpec = new IsMatureSpecification();

    ....
  }

In the AppComponent I created a private method for filtering the list of persons using a specification

  private executeSpecification(spec: ISpecification<Person>) {
    let filteredList: Person[] = [];
    this.persons.forEach(person => {
      if (spec.IsSatisfiedBy(person)) {
        filteredList.push(person);
      }
    });
    return filteredList;
  }

With the private method from above I can easily show you that the specifications work with the following code

  get females() {
    return this.executeSpecification(this.femaleSpec);
  }

  get matureFemales() {
    let matureFemales = this.femaleSpec.and(this.matureSpec);
    return this.executeSpecification(matureFemales);
  }

  get matureOrFemales() {
    let matureOrFemales = this.femaleSpec.or(this.matureSpec);
    return this.executeSpecification(matureOrFemales);
  }

  get immatureOrFemales() {
    let immatureOrFemales = this.femaleSpec.or(this.matureSpec.not());
    return this.executeSpecification(immatureOrFemales);
  }

Combine this with the following app.component.html and you can see filtered lists of persons

<h1>Specification Patterns</h1>
<div *ngFor="let person of persons">
    {{person.name}} - {{person.age}}
</div>
<hr>
<div *ngFor="let female of females">
    {{female.name}} - {{female.age}}
</div>
<hr>
<div *ngFor="let female of matureFemales">
    {{female.name}} - {{female.age}}
</div>
<hr>
<div *ngFor="let person of matureOrFemales">
    {{person.name}} - {{person.age}}
</div>
<hr>
<div *ngFor="let person of immatureOrFemales">
    {{person.name}} - {{person.age}}
</div>

This will render the filtered lists in the browser.

Conclusion

YES! It is possible and very easy to use the specification pattern in TypeScript.

Note: after I finished this blog post I found an npm package that has more or less the same implementation of the specification pattern: https://www.npmjs.com/package/ts-specification

The REST you know!! Or not? Part 2

In my previous post about REST you could read about the basics of a REST interface. In this post I will try to explain the HTTP Verbs that you should use with REST in detail.

Response types

Responses of a REST API can take any form, but the most used are XML and JSON. In this blog I will use JSON for the request and response bodies, but in principle it does not matter whether you use XML or JSON.

Safe and Idempotent

Every time you communicate with a REST API, some sort of operation is executed on the server. This can be a read operation, but it can also add, update or delete objects. When an operation does not change any data on the server side, the operation is called 'safe'. This means that if you execute the operation multiple times, no data will be changed on the server. If an operation can be executed multiple times and the outcome stays the same after the first execution, the operation is called 'idempotent'.

Create a new object (POST)

The POST verb is used for creating objects. The example below shows a POST for creating a rental, with a rental object as body (a JSON object). In the rental object you can see the id of the car and the id of the customer that rents the car. There are also other properties like the start date and many more. The POST is executed on the '/rentals' resource URI.
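
A sketch of what such a request could look like from a TypeScript client (the ids and field names are assumptions):

// POST a new rental; the body carries the ids of the car and the customer
const response = await fetch('/rentals', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ carId: 'c123', customerId: 'u456', startDate: '2016-05-01' })
});
console.log(response.status);                  // 201 (Created) on success
console.log(response.headers.get('Location')); // e.g. /rentals/a4f5s2f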

If the POST is successful the server will respond with an HTTP status code 201 (Created). The location of the newly created object is returned as a URI in the Location header of the response. By requesting that location URI, the object's information can be retrieved.

If you POST a second time with the same information, two identical objects will be created with different identifiers. If your object model has constraints on the data whereby the object cannot exist multiple times, the server may respond with an HTTP status code 409 (Conflict), which means that the object cannot be created again.

When executing a POST, data on the server will likely change and therefore this operation is not 'safe'. POST is not an 'idempotent' operation either, because when you execute the POST multiple times, the response results after the first request will not be the same as the first response result.

In the above example, both the id of the customer and the id of the car are sent in the request body. Another option is to use the navigation relation between Customer/Rental/Car to POST the new rental, as sketched below.
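
A sketch of the same create, now going through the customer's rentals resource (the ids are again assumptions):

// POST through the navigation URI; the customer id is part of the URI,
// so the body no longer needs a customerId
const response = await fetch('/customers/u456/rentals', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ carId: 'c123', startDate: '2016-05-01' })
});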

In this example you can see that the URI is different from the first example. Here the customer is found by requesting through the customers resource, using the id of the customer as part of the URI. This way of navigation can be useful when the client, for example, has a list of customers. When you click on a customer you can choose a car and select it for rental. Navigation to the customer was already set up via the URI, so the request object can be sent without a customer id.

If the data sent to the server in the request is invalid, the server may respond with the HTTP status code 400 (Bad Request). The client then knows that it should change the request body before sending it again.

Querying one or multiple objects (GET)

When a POST is performed, the location of the newly created object is returned in the response. To get the information from that URI, the URI must be requested with the GET verb. The GET verb can also be used to request a list of objects. If a GET is successful it will return an HTTP status code 200 (OK), and the response contains the requested object or list of objects.
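
A sketch of such a request, using the location returned by the POST above:

// GET the rental created earlier; the id comes from the Location header
const response = await fetch('/rentals/a4f5s2f');
console.log(response.status);         // 200 (OK), or 404 (Not Found) if it does not exist
const rental = await response.json(); // the rental object as posted before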

As you can see in this example, the GET is executed on the location that was returned by the POST operation before. The body of the response contains the information as posted before. If the requested object does not exist, the server will respond with an HTTP status code 404 (Not Found). The client then knows that it must create the object before requesting it.

In the URI above you can see that one rental object is requested, with the id 'a4f5s2f'. But what if you want to display a list of all the cars that can be rented? For getting lists of objects the GET verb is also used, but without the id part of the URI, as sketched below.
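
A sketch of a list request, assuming the /cars resource from Part 1:

// GET without an id returns the whole collection as an array
const response = await fetch('/cars');
const cars = await response.json(); // e.g. [{ id: 'c123', ... }, { id: 'c124', ... }]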

The body of a GET for a single item differs from a GET for a list of items, because the GET on a list of items returns an array of objects instead of one object.

The GET operation should be a 'safe' operation and should not change data on the server. If you request the GET URI multiple times, the result will be the same every time; therefore the GET operation is an 'idempotent' operation.

Updating an object (PUT)

If you want to update a resource, the PUT verb should be used. The PUT is mostly used with a URI that contains an identifier, so the server knows which object it should change. In the request body of the PUT the complete object is sent to the server, and the server will overwrite the data with this new information. If the update is successful, the server will respond with the HTTP status code 200 (OK), the same as for the GET verb. As with the POST operation, if the data sent using the PUT verb is not valid, the server will respond with the HTTP status code 400 (Bad Request).
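
A sketch of such an update (the field values are assumptions):

// PUT the complete, modified rental back to the same URI the GET used
const response = await fetch('/rentals/a4f5s2f', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ carId: 'c123', customerId: 'u456', startDate: '2016-05-08' })
});
console.log(response.status); // 200 (OK), or 204 (No Content) in some scenarios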

As you can see in the example above, the same URI is used as with the GET request. The body of the response contains the same information as the body of the request, because if the update was successful the data has been changed. If the data was already changed on the server, it may respond with an HTTP status code 409 (Conflict). In some scenarios the server may respond with HTTP status code 204 (No Content) instead of 200 (OK). This is mostly done to minimize data traffic between server and client: with the 204 the server presumes that the client still holds the object it just sent in the PUT request.

If the server cannot find the requested object, it will respond with an HTTP status code 404 (Not Found).

What about creating objects with a PUT? PUT can be used to create objects as well, and in that case the server will not respond with a 404 (Not Found) but with an HTTP status code 201 (Created). So when to use PUT and when to use POST, if both can create objects? The answer is fairly simple: if the ids are under control of the server, POST should be used for creating objects. If the ids can be decided by the client, PUT can also be used for creating objects.

The PUT is not 'safe', but it is 'idempotent'.

Deleting objects (DELETE)

When objects are no longer needed, the client can request deletion of the object. The deletion is performed with the DELETE verb. The URI used for the DELETE operation is the same as the URI for the GET and/or PUT. If the object is deleted successfully, the server will respond with an HTTP status code 204 (No Content).
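
A sketch of the delete request:

// DELETE uses the same URI as the GET and PUT
const response = await fetch('/rentals/a4f5s2f', { method: 'DELETE' });
console.log(response.status); // 204 (No Content) on success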

It is also possible for the server to respond with an HTTP status code 200 (OK) if the body contains the deleted object. This can be useful when implementing some sort of undo mechanism. Another (success) response can be HTTP status code 202 (Accepted), which is used when the server accepts the delete but the object is not deleted yet.

The DELETE operation is not a 'safe' operation, but it can be implemented as an 'idempotent' operation. If an object that the client is trying to delete is already deleted, the server may respond with an HTTP status code 200 (OK). This indicates that the delete is executed even if the object was not there. In this scenario it does not matter if you execute the delete multiple times; the result will be the same, and therefore the operation is 'idempotent'. If the server returns an HTTP status code 404 (Not Found) on deletion of a non-existing object, the operation is not 'idempotent', because executing the operation multiple times will result in a 200 the first time and a 404 on every next attempt.

It's up to the designer of the API how this works; I personally prefer the 'idempotent' version of the DELETE operation.


That's enough for today!


Other post(s) in this series so far: The REST you know!! Or not? Part 1

.NET Core: A great promise but for now confusing!

I've been working on different kinds of .NET projects and have always been a fan of Visual Studio as an IDE. .NET was supposed to be a platform for different devices and operating systems, but somehow Microsoft did not manage to get this working. With .NET Core and ASP.NET Core they seem to be on the right track now. .NET Core is open source and can be found at http://dotnet.github.io/, ASP.NET Core can be found at https://get.asp.net/, with documentation at https://docs.asp.net/en/latest/.

I started with the ASP.NET documentation and managed to get an app running with and without Visual Studio using DNX and DNU. Using a JSON file for managing projects is something I like, as long as the IDE supports it correctly. When creating your app in Visual Studio it will automatically add the necessary dependencies. GREAT!!!

So after playing around with this I started to like the way it works, even though it is not finished yet.

And suddenly..... Boom..... ASP.NET RC2 is not coming soon, because DNX and DNU will become obsolete due to the new .NET Core CLI. It looks a lot like DNX from a command-line perspective, but it is really different. The .NET Core CLI can be used to create .NET applications that are compiled to IL, but it is also possible to compile to a native application that does not need any .NET Framework installed on the system it runs on. WOW!!! Seems cool, right??

Let's give it a try:

  • Create a directory for your project (ex: /MyConsole)
  • Open the command prompt in the directory and run the following command:
    dotnet new
  • Three files are created: NuGet.config, Program.cs, Project.json
  • Run the following command to restore dependencies:
    dotnet restore
  • And to build your project run:
    dotnet build
  • An executable is created in the 'bin\Debug\dnxcore50\win7-x64' folder.
  • To create a native application, use the VS2015 x64 Native Tools Command Prompt and run this command:
    dotnet build --native
    NOTE: Remove the bin folder from the previous build, otherwise nothing will show up!
  • A native executable is created in the 'bin\Debug\dnxcore50\win7-x64\native' folder.

WOW!!

So there are two different ways of compiling .NET Core applications: DNX/DNU and the .NET Core CLI. And if that's not confusing enough, let's look at the project.json file of the project we just created!

    "frameworks": {
        "dnxcore50": { }
    }

What? dnxcore50? Yes... DNX is used for console applications even if you use the .NET Core CLI, because the 'dotnet' moniker (the moniker for the .NET Core CLI) does not yet support console applications. (If you know how to get this working, please let me know.)

If you create a .NET library, you can use the 'dotnet' moniker for building DLLs.

So if you create a console application and a class library, the class library can use the 'dotnet' framework moniker, but for the console application you still need the 'dnxcore50' moniker, as sketched below.
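
Based on that description, the frameworks section of the class library's project.json would look something like this (a sketch; the exact moniker version may differ):

    "frameworks": {
        "dotnet": { }
    }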

I think Microsoft is going in the right direction with this .NET Core CLI, but with DNX/DNU still in place and no fully functional IDE for the .NET Core CLI yet, I think it is confusing.

Hopefully they get all the pieces together soon!

The REST you know!! Or not? Part 1

Why this POST if you already know REST?

I have noticed that there are many different opinions about what REST is and how you can use it to create an API that is easy to use. Sometimes REST APIs are complex just because they are not created using correct REST.

As a serious developer I don't want to write 'Bad Code' and hopefully I am not the only one! If you are planning to use REST and don't want to write 'Bad Code' either, read this article. If you don't mind writing 'Bad Code', also read this article, because then you know how to create 'Bad Code' by not implementing the solutions below ;-).

What is REST?

Let's begin with the basics. REST stands for Representational State Transfer (see WIKI - REST). Okay... So now you know, right? In short, REST is a stateless style of communication between client and server. This means that the server does not keep track of state between two requests. The communication can be cached for performance reasons. In most cases HTTP(S) will be used for REST communication.

How does the communication work?

REST uses the HTTP verbs GET / POST / PUT / DELETE in most cases. Other verbs can be used, but in this article I will focus on these four.

A lot of blogs and articles write about the REST verbs as a CRUD pattern with the following assumptions:

  • POST = CREATE (Insert)
  • GET = READ
  • PUT = UPDATE
  • DELETE = DELETE

The above assumption is partially correct, but you are not limited to it. A PUT can also be used for CREATE. This is shown in the PUT section of Part 2.

How should you think?

If you know other communication protocols between client and server, you may have noticed that a lot of protocols are RPC (remote procedure call) based, which means that the client requests some operation from the server, which then returns the output of that operation. REST is not RPC based but resource based, which means that your thinking pattern must change from RPC style (GetCars, RentCar) to resource style (GET /cars, POST /rentals). Let's take a look at how this really works. When you are implementing a REST API, try to use nouns and not verbs for your URIs.

Domain model

When defining a REST interface, the domain model for the REST interface must be clear. During a project changes will always occur, but some kind of domain model must be available to be able to define your model for the REST interface.

The model I will use in my examples is about renting cars:

The model consists of 4 classes:

  • Customer, which is the person that rents a car
  • Address, which is an address of the customer
  • Rental, which is the resource holding the rental period and linking the Customer to a Car
  • Car, which is a car that can be rented

When defining resources you have to identify the root aggregates of the model. For my model I see 3 root aggregates: Customer, Rental and Car. From the application perspective these objects are likely to get their own screen with a list of objects: all customers in the system, all rentals in the system, all cars in the system. Address is not a root aggregate because we will never look up an address on its own; an address will always be displayed as part of a customer.

By identifying the root aggregates the following URI structure is appropriate:

/customers
/customers/{id}/addresses/
/rentals
/cars

One other resource can be the one showing the rentals of a customer:

/customers/{id}/rentals
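
As a sketch of how a client would request this resource (the customer id is an assumption):

// request all rentals of one customer
const rentals = await (await fetch('/customers/u456/rentals')).json();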


In the next post(s) I will be talking about the different HTTP verbs in detail, different response results, security issues, the maturity model and lots of other topics about REST.


Other post(s) in this series so far: The REST you know!! Or not? Part 2

Git commands I keep forgetting

(see also Part 2)

Most of the time that I use Git, I use it from inside Visual Studio. But sometimes it is necessary to use the console window for Git commands. In this post I will show some Git commands I keep forgetting myself.

Show remote branches

If you use Visual Studio 2015, remote branches are available in the IDE, but when using Visual Studio 2013 this is not the case. If you want to check which branches are available remotely, use the following command:

git remote show origin

Remove unpushed commits

Sometimes it happens that I have already committed my changes but don't want to push the commits to the remote branch because of a change of direction. One possible thing to do is to revert the commit, but this results in two commits: one with the changes and one with the reverted changes. I personally don't like having commits for something I don't want. To actually remove the commits you can use the following command:

git reset --hard origin/master

Remove deleted branches from local cache 

Sometimes it happens that a branch has already been removed from the remote but is still in your list when executing the 'remote show origin' command. When working with GitFlow, for example, it happens a lot that Visual Studio still shows remote feature branches even when they are already finished. To clear the list and remove the branches that are no longer available, use the following command:

git fetch -p

Credentials asked every time you pull from git 

If you use Git from the command line frequently, it will ask for your credentials every time you execute the 'git pull' command. To avoid entering your credentials every time, you can store them in the credential helper using the following commands:

git config --global credential.helper "cache --timeout=3600"
git config --global credential.helper wincred

When you perform a git pull, Git will ask for the credentials one time and will store them in the credential cache.


There are a lot more Git commands, but the commands I mentioned above are the ones that I forget all the time. Hopefully this can help you as well.