Placement reflections – 2 Months in!

So I’m two months into working for Inspire Tech UK LTD, an online e-commerce business based in Salford. One thought that came straight to mind when writing that was, GEE, HOW TIME FLIES! I’m not going to lie, the first few weeks were quite difficult, but I’m writing that off as nerves and trying to impress from the get-go. The past few weeks, however, have been brilliant, and looking back, I can safely say that I’ve learned far more than I had anticipated.

Despatch Manager Online – Why did you give me such a headache?

One of the projects that stands out the most was the migration from Royal Mail’s Despatch Express to the new and not so improved (wait, did I say that out loud?) Despatch Manager Online. After getting access to the company’s account to begin development, it seemed like it was going to be a simple integration process. I thought that all I needed to do was modify our existing code to generate a slightly different flat file (.txt). Oh, how wrong I turned out to be. Not that writing the code was difficult, but DMO’s rules on exactly what information goes in each field are far too restrictive. I’ll write another blog post to explain this in detail, but for now let’s just say foreign characters and certain other well-used characters, such as a forward slash, are disallowed. The system doesn’t attempt to remove or replace them (like DE did); it instead just throws an error message when trying to import. Since we trade online, we get a lot of orders from overseas, where accent marks and diacritics are used in addresses. So what was my solution? A very hacked-together mess that ‘works’. I’m still not overly happy with it, and when I get some spare time I’m going to revisit the problem and try to develop a more suitable solution. However, I was on the clock on this one, and ‘it works for now’.
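For anyone hitting the same import errors, the general fix is to strip the diacritics and any disallowed characters from each address field before writing the flat file. The snippet below is just a rough sketch of that idea (the helper name and the replacement rules are illustrative, not the exact code I ended up with); .NET’s Unicode normalisation does most of the heavy lifting:

using System.Globalization;
using System.Text;

// Hypothetical helper: sanitise an address field before it goes into the DMO flat file.
public static class AddressSanitiser
{
    public static string Sanitise(string field)
    {
        // Decompose accented characters ("é" becomes "e" plus a combining accent mark),
        // then drop the combining marks so only the plain letters remain.
        string decomposed = field.Normalize(NormalizationForm.FormD);
        StringBuilder builder = new StringBuilder(decomposed.Length);

        foreach (char c in decomposed)
        {
            if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
            {
                builder.Append(c);
            }
        }

        // Swap out other characters DMO rejects (the forward slash being the main offender for us).
        return builder.ToString().Normalize(NormalizationForm.FormC).Replace('/', ' ');
    }
}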

Basic Image Editor – A terribly named, yet neat tool (if I do say so myself)

More recently, just a few days ago in fact, I looked at how my manager edits and resizes product images for use on our website, eBay, and so on. At the moment there is a partially automated solution: a macro set up in Photoshop to resize the images. My manager complained that it used far too much in terms of system resources (i.e. CPU and RAM) and took a while to get through multiple images. Plus, we needed a way to upscale images to a slightly higher resolution without causing pixelation. Today, I finished writing a new tool to take care of all this. Basic Image Editor (look at how awesome I am with names….) can take a single file or an entire folder’s worth of files and either resize them or add additional white space around the original images. My main objective here was efficiency: I wanted the tool to take care of several files in next to no time at all. I wrote the solution over the past couple of days and spent a few hours today optimising it and fixing memory leaks. The result actually surprised me quite a lot.

We have a folder containing approximately 6,800 500×500 JPEG images (perfect for testing). I expected the tool to process maybe 5 images per second before trying it. The first run actually achieved 10 images/second. Not bad. But I wanted to push it further. After optimising certain parts of my logic and going for a second run, it achieved… wait for it… 100 images/second. This includes the entire process of reading, editing and writing the output file (quite a demanding set of I/O operations). Resource usage on my development machine peaked at 40% CPU (expected) and just 60MB of RAM! Needless to say, I am pretty damn proud of it and can’t wait to properly demonstrate it to my manager tomorrow!
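I can’t share the tool itself, but the core resize-and-pad operation looks roughly like the sketch below (the method name and parameters are illustrative rather than the real code). The big win for memory usage was wrapping every Bitmap and Graphics object in a using block, so the GDI+ handles are released as soon as each image is done:

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

// Illustrative sketch: draw the source image onto a white canvas of the target size,
// preserving the aspect ratio, and dispose of every GDI+ object as soon as it's finished with.
public static void ResizeWithPadding(string inputPath, string outputPath, int targetWidth, int targetHeight)
{
    using (Bitmap source = new Bitmap(inputPath))
    using (Bitmap canvas = new Bitmap(targetWidth, targetHeight))
    using (Graphics graphics = Graphics.FromImage(canvas))
    {
        graphics.Clear(Color.White);
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;

        // Scale to fit inside the target dimensions without distorting the image.
        float scale = Math.Min((float)targetWidth / source.Width, (float)targetHeight / source.Height);
        int newWidth = (int)(source.Width * scale);
        int newHeight = (int)(source.Height * scale);
        int offsetX = (targetWidth - newWidth) / 2;
        int offsetY = (targetHeight - newHeight) / 2;

        graphics.DrawImage(source, offsetX, offsetY, newWidth, newHeight);
        canvas.Save(outputPath, ImageFormat.Jpeg);
    }
}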


There have been other things that I’ve been up to as well, such as website tweaks, fixes and additions, but if I were to share everything, this would be a rather lengthy post. Needless to say, I am thoroughly enjoying my time at Inspire Tech, my colleagues are great, and I’ve become very passionate about the programming work I do. The sense of satisfaction I get from seeing something I’ve spent time and effort developing finally working and being used in the production environment is wonderful. It’s definitely giving me the motivation to want to learn more and become a better developer. Choosing to do an Industrial Placement as part of my degree has already proven to be one of the best decisions I’ve ever made. 🙂

Parsing a .CSV file in C#.NET

Whilst working on my latest project at work, I did some research into the different ways I could parse a .CSV file. This sort of operation is quite common, and so there were a wide range of methods people had come up with to do it, each with its own advantages and drawbacks. I needed a solution that could handle a file with thousands of records in as little time as possible, since there were other, more time-consuming operations that also needed to be done. Below are the two examples that I worked with.

1. The Simple Parser.

Originally posted: http://www.switchonthecode.com/tutorials/building-a-simple-csv-parser-in-csharp
public List<string[]> parseCSV(string path)
{
    List<string[]> parsedData = new List<string[]>();

    try
    {
        using (StreamReader readFile = new StreamReader(path))
        {
            string line;
            string[] row;

            while ((line = readFile.ReadLine()) != null)
            {
                row = line.Split(',');
                parsedData.Add(row);
            }
        }
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message);
    }

    return parsedData;
}

Explanation:

Not knowing much about parsing a .csv file, I tried this solution first. It initially worked with the .csv file I was given to parse, since the content itself didn’t actually contain any comma characters. That’s exactly where this solution falls down. When each line is read in as a string, the Split() method is called on it. Split() takes the delimiter character you want to break the line on and returns a string array, here called ‘row’, with each element being one parsed field. Then, because we’re likely to be reading more than a single line, each row is added to a List called ‘parsedData’, which is returned at the end of the function. Pretty simple stuff.

Why not to use it:

As discussed, each field in a typical .csv is delimited using the comma character. But what happens when you’re reading in some text that uses full English grammar? Say, for example, you have a product description that reads: “The all new i7 CPU, is the latest cutting edge innovation from Intel.” When Split(‘,’) is called on that line, the description is broken up and added to the array as shown below:

row.GetValue(0).ToString() = “The all new i7 CPU”

row.GetValue(1).ToString() = “is the latest cutting edge innovation from Intel”

Hopefully you can see why this is an issue: the description has been split into two separate values in the array, because the comma inside it was treated as a field delimiter.

2. The better solution.

Originally posted here: http://stackoverflow.com/questions/3507498/reading-csv-file by user: David Pokluda.
TextFieldParser parser = new TextFieldParser(@"c:\temp\test.csv");
parser.TextFieldType = FieldType.Delimited;
parser.SetDelimiters(",");
while (!parser.EndOfData) 
{
    //Processing row
    string[] fields = parser.ReadFields();
    foreach (string field in fields) 
    {
        //TODO: Process field
    }
}
parser.Close();

My Implementation:

public List<string[]> parseCSV(string path)
{
    List<string[]> parsedData = new List<string[]>();
    string[] fields;

    try
    {
        TextFieldParser parser = new TextFieldParser(path);
        parser.TextFieldType = FieldType.Delimited;
        parser.SetDelimiters(",");

        while (!parser.EndOfData) 
        {
            fields = parser.ReadFields();
            parsedData.Add(fields);

            //Did more stuff here with each field.
        }

        parser.Close();
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message);
    }

    return parsedData;
}


Explanation:

Whilst the TextFieldParser class technically belongs to VB.NET (it lives in the Microsoft.VisualBasic assembly), it works perfectly fine within C#.

You’ll need to add a reference to Microsoft.VisualBasic, by right-clicking on ‘References’ in Visual Studio and selecting ‘Add Reference…’. You’ll also need to include the required namespace at the top of the code that parses the CSV. This is done as follows:

using Microsoft.VisualBasic.FileIO;

This will then allow you to use the TextFieldParser class from within C#.

This method reads each field from the .csv based upon the delimiter passed to parser.SetDelimiters(), as opposed to reading each line and then splitting it. Unlike the previous example, it will not break up any content text, so long as the field is surrounded by double quote marks (TextFieldParser handles quoted fields through its HasFieldsEnclosedInQuotes property, which is enabled by default, and this has worked perfectly fine for me).
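As a quick illustration (the sample line is made up, and I’m using the constructor overload that takes a StringReader rather than a file path), a comma inside a quoted field comes back as part of a single value:

using System.IO;
using Microsoft.VisualBasic.FileIO;

// A comma inside a quoted field is returned as part of a single value, not split in two.
string sampleLine = "\"ABC123\",\"The all new i7 CPU, is the latest cutting edge innovation from Intel.\"";

using (TextFieldParser parser = new TextFieldParser(new StringReader(sampleLine)))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true; // this is the default, shown here for clarity

    string[] fields = parser.ReadFields();
    // fields[0] == "ABC123"
    // fields[1] == "The all new i7 CPU, is the latest cutting edge innovation from Intel."
}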

Any questions or comments? Feel free to post below!

From ‘Lazy’ Student, to Web Developer – Placement Week 1.

It’s been a little while since I last blogged, but the past few weeks have been rather busy.

As part of my degree course at Salford University, I opted to do an Industrial Placement for a year. With how competitive the jobs market is at the moment, any relevant experience in my field of web development would definitely put me at an advantage over other graduates when the time comes. I applied for a position at Inspire Tech UK, a small company based in Salford, working on their website and back-end systems using ASP.NET (C#). The company trades from its own website as well as eBay, Amazon and Play under the name eoutlet.
