Wednesday, December 23, 2015

ReSharper 10 – ReSharper Build


ReSharper Build is a new feature introduced by JetBrains with ReSharper 10, and I really like it. Put simply, ReSharper Build manages the build process and decides whether each individual project needs to be built. When building a solution, any project that doesn't need to be rebuilt is intelligently skipped, and what's faster than doing nothing?

  • Build happens out of process

  • Efficient timestamp monitoring

  • Public API surface monitoring

When a project is built, ReSharper scans the just-compiled output assembly. If its public API hasn't changed, then ReSharper Build knows that it doesn't need to build any of the referencing projects, and they are intelligently skipped. This has a huge impact on larger solutions: traditionally, a change to the business logic of a root assembly would require the rest of the solution to be rebuilt, but ReSharper Build rebuilds only the root assembly and skips the rest.

While ReSharper Build is working, you can watch its progress in the Build & Run window. Some projects are built in parallel and others sequentially, which makes the whole process visually intuitive.


Friday, August 14, 2015

Visual Studio 2015 In Action - 1


After playing with VS 2015 for a week, I am pretty happy and impressed by it. It is greatly improved over VS 2013. Don't forget to install the Color Theme Editor extension; it comes with 11 color themes, and Solarized (Dark) is my favorite.

What’s new in C# 6?

1. Roslyn compiler. You can now ship your API together with analyzers and code fixes.

2. using static, for example: using static System.Console;

3. Immutable objects. They are now much easier to build with getter-only auto-properties backed by readonly fields; you can set a getter-only property from the constructor.

4. Expression-bodied (lambda-style) properties.

5. String interpolation with the $ prefix, so you can write $"{x} - {y}" instead of calling string.Format.

6. The nameof operator, which gives you the name of a variable or member as a string and is very refactoring-friendly.

7. The null-conditional operator ?.

8. await can now be used in finally (and catch) blocks.


What’s new in IntelliTrace in VS 2015

1. You can filter events by category, such as ADO.NET or gestures.

2. You can zoom in on the timeline.

3. You can step back through historical call stacks and the corresponding code.

4. There is a standalone version of IntelliTrace.

What’s new in WPF in 2015

1. Blend for Visual Studio 2015

2. In-place template editing, without switching to a separate document

3. UI debugging tools for WPF


In previous versions of Visual Studio, sharing code meant going to a website to manually publish a repository, jumping through a myriad of workflows and manual steps like creating accounts and services just to start sharing code. In Visual Studio 2015, the process of getting your local repository onto Visual Studio Online (VSO) has been dramatically simplified. 

Monday, July 20, 2015

Visual Studio 2015, ASP.NET 4.6, ASP.NET 5 & EF 7


VS 2015 was released on Monday, 7/20/2015. You can download it now from MSDN subscriber downloads.

The feature list is:

  • JSON editor
  • ReactJS editor
  • Grunt/Gulp support
  • Bootstrap support
  • ECMAScript 6
  • HTTP/2
  • and many more



Friday, July 17, 2015

NodeJS in Action - 1

Node.js is an event-driven, server-side JavaScript environment. Node runs JavaScript using the V8 engine developed by Google for use in their Chrome web browser.  The major speed increase is due to the fact that V8 compiles JavaScript into native machine code, instead of interpreting it or executing it as bytecode. http://blog.modulus.io/top-10-reasons-to-use-node


Node.js way: synchronous.
To perform a filesystem operation you are going to need the fs module from the Node core library. To load this kind of module, or any other "global" module, use the following incantation:

var fs = require('fs')

Now you have the full fs module available in a variable named fs. All synchronous (or blocking) filesystem methods in the fs module end with 'Sync'. To read a file, you'll need to use fs.readFileSync('/path/to/file'). This method will return a Buffer object containing the complete contents of the file.
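For example, here is a minimal sketch of a blocking read (the file name 'file.txt' is just a placeholder for illustration):

var fs = require('fs')

// readFileSync blocks until the entire file has been read,
// then returns a Buffer with the file's contents
// ('file.txt' is a hypothetical input file)
var buffer = fs.readFileSync('file.txt')

// convert the Buffer to a string explicitly
console.log(buffer.toString('utf8'))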

Node.js way: asynchronous.
Instead of fs.readFileSync() you will want to use fs.readFile() and instead of using the return value of this method you need to collect the value from a callback function that you pass in as the second argument.

Remember that idiomatic Node.js callbacks normally have the signature:

                function callback (err, data) { /* ... */ }

Also keep in mind that it is idiomatic to check for errors and do early-returns within callback functions.
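Putting these conventions together, here is a sketch of the same read done asynchronously (again with a hypothetical 'file.txt'):

var fs = require('fs')

fs.readFile('file.txt', function (err, data) {
    // idiomatic early return on error
    if (err) {
        console.error(err)
        return
    }
    // data is a Buffer, just like the synchronous version's return value
    console.log(data.toString('utf8'))
})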

Node.js module
Create a new module by creating a new file that contains just the functions you want to share (for example, your directory reading and filtering function). To define the module's exports, assign to the module.exports object, overwriting what is already there:

module.exports = {
    foo: function () {
        console.log("foo is here.");
    },
    bar: function () {
        console.log("bar is here.");
    }
};

To use your new module in your original program file, use the require() call in the same way that you require('fs') to load the fs module. The only difference is that paths to local modules must be prefixed with './'. The '.js' extension is optional and you will often see it omitted. So, if your file is named mymodule.js, then:


var mymodule = require('./mymodule.js')
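With the module loaded, its exports are simply properties of the returned object:

mymodule.foo() // prints "foo is here."
mymodule.bar() // prints "bar is here."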

Thursday, January 15, 2015

DB Tables Growing Big?


A few transaction tables in my database for trade data (trades, trade legs, etc.) had been growing bigger and bigger over the years. At a certain point, updating any trade data became noticeably SLOW. Although it was easy to blame an underpowered database server, better designs can avoid this.

How can I keep up read/write performance on these ever-growing tables? There are database-level technologies, such as table partitioning, but I wanted a simpler, tangible, easy-to-maintain design that stays fully under my control.

I figured an easy way to boost performance was to split each growing table into an archive table and an active table. I created jobs to archive records overnight: any record older than one day is pushed over to the archive table. Read operations go against a view that UNIONs the active and archive tables, while create/update/delete operations hit only the active table. It worked out really well. Alas, that lasted only until I needed to update the archived tables.

UPDATE, hmm... can the archive be just read-only? No. So how do we solve this problem? Since an insert is much faster than an update, can we turn UPDATE operations against archived data into INSERT operations? Yes, but before doing so, a versioning mechanism has to be introduced for the rows. In my solution, I simply added an IDENTITY column (SQL Server), so the same record can now exist with multiple revisions. For update and delete actions, I archive the affected records using triggers. For insert actions, I update the unique record ID with the PK if it is the first revision. The last tricky point is that the archiving job might break the integrity of the active tables, so when inserting into the active tables, I need to clean them up before the insert. In the end, this solution worked out pretty well.

