Assignment 2 Record linkage and data cleaning



COMP3430 / COMP8430 – Data Wrangling –
Assignment 2
Worth 15% of the final grade for COMP3430 / COMP8430
Overview and Objectives
This assignment covers the topics of record linkage and data cleaning, with a focus on identifying possible data quality
problems in data sets and taking the necessary steps to correct them. As in Assignment 1, we ask you to generate a second
data set, and then to both identify data quality problems in these data sets and fix them. Fixing all the data quality
issues with these data sets will likely take more time than you have. This is intentional and we don’t expect you to
correct everything. It also reflects the real world where there is almost always more data cleaning that can be done. So
prioritise your effort based on the tasks we ask you to do, and the likely benefits to the end use of the data set. Make
sure you justify and describe these choices.
Assignment Structure
The assignment consists of four (4) tasks, as described below, which are worth different numbers of marks. Make sure you
answer all aspects of each task.
If you have any questions about the assignment, please post them on Wattle – however, do not post any partial solutions,
program code, equations, calculations, URLs, etc., or any hints on how to solve any of the assignment tasks.
This assignment will be marked out of 15, and it will contribute 15% of your final course mark.
Note that not all tasks and questions are equally difficult. For some of the tasks there is no single right or wrong answer.
Marks will be awarded based on your reasoning and the justification of your decisions and explanations, as well as clarity
and correctness of writing.
We will endeavour to release your marks and feedback within two teaching weeks after the submission deadline. If you feel
we have made an error in marking, you have two weeks following the release of marks to raise any issues with the course
convener, after which time your mark will be considered final. If you request that we re-mark your assignment, we
will re-mark the entire assignment and your mark may go up or down as a result.
Data Set Generation for This Assignment
As with Assignment 1, for this assignment each of you will again work on individual data sets that will be based on a second
master data set we will provide, and a new data generation program we will also provide.
Note that we have generated the master data set based on real data (such as lookup tables of names, addresses, etc.), and
we have then corrupted and modified certain aspects of that data set. We have intentionally tried to include the types of
relationships, features, errors, and other data quality issues that you might find in real data sets. Any similarity to real
persons or places is entirely coincidental.
Download the second master data set from Wattle (to be made available in week 6), named dw_assignment_master2.csv.gz,
and the new data generation program we will also provide. Copy both these files into one folder /
directory, and run the code using Python 3 in the following way:
python3 your_ANU_ID dw_assignment_master2.csv.gz
The program will generate an output data set named data_wrangling_education_2021_your_ANU_ID.csv, and print
some output which contains the following important lines (for the example ANU ID u1234567):
$ python3 u1234567 dw_assignment_master2.csv.gz
Your student data set for the data wrangling 2021 assignment 2 has been generated and written into file:
data_wrangling_education_2021_u1234567.csv
Your ANU ID check code is: d76225bc
Your student data set check code is: 47e659a11137
*** Check this pair of numbers is in the list provided on Wattle, if not contact the course convenor.
- Write down your two check codes, because you must provide them with the assignment submission. This
will allow us to validate that you have generated and used the correct data set.
- Check that the pair of check codes you get (like in the example above, d76225bc and 47e659a11137) is in the list
of check codes we will provide on Wattle (in week 6, under the Assignment 2 document). This will allow you to
check that you have generated the correct data set.
- Note that the check codes are different for the data set you generated for Assignment 1 and this new data set.
- You must use your individual generated data set, data_wrangling_education_2021_your_ANU_ID.csv (generated based on your ANU ID), for Tasks 2 to 4 of this assignment, together with the data set you generated
for Assignment 1, data_wrangling_medical_2021_your_ANU_ID.csv.
Assignment Tasks
- Task 1 (3 marks): Generate two strings of numbers based on your ANU ID (excluding the first character ‘u’):
– s1 is the string ‘123’ concatenated with the first four digits of your ANU ID.
– s2 is the string ‘123’ concatenated with the last four digits of your ANU ID. Include any leading zeros.
For example, if your ANU ID is u9800765 then s1 = ‘1239800’ and s2 = ‘1230765’.
Now manually calculate the similarities between these two strings and include in your assignment both your
workings (equations or edit matrices) as well as the final results for:
1. The Jaccard similarity based on unigrams (q = 1).
2. The Dice coefficient similarity based on bigrams (q = 2).
3. The Levenshtein edit distance between the two strings, assuming costs of 1 for all types of edits.
4. The Levenshtein edit distance between the two strings, assuming a cost of 2 for substitutions and costs of 1 for inserts
and deletes.
If you do not include your workings you will not receive any marks. Python or R code are not acceptable
workings. For sub-tasks 3 and 4 you must show the full edit matrices as your workings (as discussed in the lectures).
You will receive 0.5 marks each for correct calculations and results for sub-tasks 1 and 2, and 1 mark each for correct calculations
and results for sub-tasks 3 and 4.
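Code is not an acceptable substitute for your manual workings, but it can be useful for cross-checking your final numbers before you submit. The sketch below (plain Python, no libraries) implements the four measures for the example strings from the text, s1 = ‘1239800’ and s2 = ‘1230765’; it is an illustration only, not part of the required answer.

```python
def qgrams(s, q):
    """Return the list of q-grams (substrings of length q) of string s."""
    return [s[i:i + q] for i in range(len(s) - q + 1)]

def jaccard(s1, s2, q=1):
    """Jaccard similarity on the sets of q-grams of the two strings."""
    a, b = set(qgrams(s1, q)), set(qgrams(s2, q))
    return len(a & b) / len(a | b)

def dice(s1, s2, q=2):
    """Dice coefficient similarity on the sets of q-grams of the two strings."""
    a, b = set(qgrams(s1, q)), set(qgrams(s2, q))
    return 2 * len(a & b) / (len(a) + len(b))

def levenshtein(s1, s2, sub_cost=1):
    """Edit distance with insert/delete cost 1 and a configurable
    substitution cost, via the standard dynamic-programming matrix."""
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete all of s1[:i]
    for j in range(n + 1):
        d[0][j] = j                       # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + cost) # match / substitute
    return d[m][n]

s1, s2 = '1239800', '1230765'
print(jaccard(s1, s2, 1))      # Jaccard on unigrams
print(dice(s1, s2, 2))         # Dice on bigrams
print(levenshtein(s1, s2, 1))  # unit-cost edit distance
print(levenshtein(s1, s2, 2))  # substitution cost 2, insert/delete cost 1
```

The `d` matrix built by `levenshtein` is exactly the edit matrix you are asked to show by hand for sub-tasks 3 and 4.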
For the following tasks, you need to use your individual education data set generated as per the instructions above together
with your individual medical data set generated for Assignment 1.
Assume that the final task to be conducted on these data sets (by a data analyst after data wrangling has
been completed) is to examine links between an individual’s education, their employment history, and their
health. You therefore need to merge the two data sets into one single data set and correct data quality issues in this
merged data set.
We do not require the use of any specific tool, software package or programming language and you are free to choose whichever
you feel most comfortable with. However, please note that due to the size of the data sets, manual inspection and correction
of individual records will be very time consuming. So make your decisions on data cleaning accordingly.
Once you have cleaned and merged the two data sets, you need to upload to Wattle a single .csv file that contains your final
merged and cleaned data set, named data_wrangling_merged_2021_your_ANU_ID.csv. For example, for an ANU ID
u1234567 the final data set should be named data_wrangling_merged_2021_u1234567.csv.
As part of marking your submission we will compare your submitted data set against a cleaned data set that
we have generated.
- Task 2 – Merging the data sets (4 marks): You must merge your two individual data sets into a single data set using
the Social Security Number (SSN) attribute that is common to both data sets. Your new data set must include
a header line with the original attribute names (except for any new attributes you have generated) and
must include the SSN attribute – named ‘ssn’, as in the two data sets to be merged.
You will need to address the following four aspects of this merging step in the assignment answer field in Wattle:
1. How many SSNs occurred in common in both data sets? How many occurred only in the medical data set, and how
many occurred only in the education data set?
2. If there were records that only occurred in a single data set, describe what you did with them, and explain / justify why.
3. If there were duplicate records in an input data set (with the same SSN), describe what you did with them, and
explain / justify why.
4. If there were any inconsistencies between records in the two data sets with the same SSN, what types of inconsistencies
were there, and how many of each type? How did you deal with these inconsistencies, by either resolving them or
processing them otherwise? Describe and justify the approaches you took.
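No specific tool is required, but a pandas outer merge with `indicator=True` answers question 1 almost directly. The sketch below uses tiny stand-in DataFrames so it runs on its own; in practice you would read your two generated .csv files instead, and the column names other than ‘ssn’ are hypothetical.

```python
import pandas as pd

# Toy stand-ins for the two data sets; real use would be, e.g.,
# pd.read_csv('data_wrangling_medical_2021_u1234567.csv', dtype=str).
medical = pd.DataFrame({'ssn': ['111', '222', '333'],
                        'age': ['34', '51', '29']})
education = pd.DataFrame({'ssn': ['222', '333', '444'],
                          'degree': ['BA', None, 'PhD']})

# An outer merge keeps records that occur in only one data set;
# indicator=True adds a '_merge' column tagging each record's origin.
merged = medical.merge(education, on='ssn', how='outer',
                       suffixes=('_med', '_edu'), indicator=True)
counts = merged['_merge'].value_counts()
print(counts)  # both=2, left_only=1, right_only=1 for the toy data
```

The `left_only` / `right_only` counts are exactly the records you then need to decide about in questions 2 to 4 (keep, drop, or flag, with justification).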
- Task 3 – Missing and incorrect values (4 marks):
1. Following the missing patterns table we discussed in the labs and used in Assignment 1, find the combination of three
attributes with the highest number of missing values (i.e. the three-attribute combination with the largest number
of records with missing values). Provide the attribute names and the corresponding number of records with missing
values in your merged data set.
2. For the two attributes with the highest number of missing values (individually) in your merged data set, either:
– consider if you can impute these missing values. If so describe the approach you have taken to impute
missing values, and justify why you have taken this approach; or
– if you decided you cannot impute missing values in an attribute then describe and justify why you have not
done any imputation.
3. Describe what incorrect or impossible values you found in attributes in your merged data set, and state how many
such incorrect or impossible values there are for each attribute. Also describe why you believe these values are
impossible or incorrect.
4. Describe how you dealt with the incorrect or impossible values identified in your merged data set (for example,
by correcting them in some way or another).
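For sub-task 1, one possible approach (assuming pandas; the toy column names below are hypothetical) is to count, for every three-attribute combination, the records in which all three attributes are missing, and take the maximum:

```python
from itertools import combinations
import pandas as pd

# Toy merged data set; in practice, load your merged file instead.
merged = pd.DataFrame({
    'ssn':    ['111', '222', '333', '444'],
    'salary': [None,  None,  None,  '50k'],
    'degree': [None,  None,  'BA',  None],
    'street': [None,  None,  None,  None],
})

missing = merged.isna()  # Boolean mask: True where a value is missing
best = max(
    ((cols, missing[list(cols)].all(axis=1).sum())
     for cols in combinations(merged.columns, 3)),
    key=lambda pair: pair[1],
)
print(best)  # the attribute triple with the most jointly missing records
```

For the toy data this reports ('salary', 'degree', 'street') with 2 records. The same `missing` mask also gives the per-attribute counts (`missing.sum()`) needed to pick the two attributes for sub-task 2.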
- Task 4 – Other data cleaning (4 marks): Perform other data cleaning tasks that you think are important, keeping
in mind the final use of the cleaned data set: to examine links between an individual’s education, employment,
and their health.
Things you may wish to consider include (but are not limited to):
1. Do any of the attributes in your merged data set have data quality issues with regard to the data quality dimensions
we discussed in Lecture 5 that are not covered by Tasks 2 and 3?
2. Are any data reduction or transformation tasks required?
3. Are there values that seem to be in the wrong attribute(s)? If so how can you correct them?
4. Any other problems you detected that you think are worth correcting.
You should consider up to four extra data exploration and data cleaning tasks, and for each describe what you have
done, justify why you have done it (within the context of the final use of the cleaned data set described above), and any
numerical results relevant to the task that you think are important.
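As one illustration of point 3 above, the sketch below (pandas assumed; the column names, patterns, and toy values are all hypothetical) flags records where a name field contains only digits while the postcode field is not numeric – a common symptom of two values having swapped attributes – and swaps them back:

```python
import pandas as pd

# Toy data with one record whose first_name and postcode are swapped.
merged = pd.DataFrame({
    'first_name': ['alice', '2601', 'carol'],
    'postcode':   ['2601',  'bob',  '2600'],
})

# A name made entirely of digits, next to a non-numeric postcode,
# suggests the two values landed in each other's attributes.
bad_name = merged['first_name'].str.fullmatch(r'\d+', na=False)
bad_code = ~merged['postcode'].str.fullmatch(r'\d{4}', na=False)
swapped = bad_name & bad_code

# Swap the two values back for the flagged records only.
merged.loc[swapped, ['first_name', 'postcode']] = \
    merged.loc[swapped, ['postcode', 'first_name']].values
print(merged)
```

Whatever checks you choose, report how many records each one flagged and why the correction is safe, since your submitted data set will be compared against our cleaned version.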
Marking: For each question in Tasks 2 to 4 you will receive up to one mark for appropriately describing and justifying your approach.
For task 2, you will receive up to 4 marks for describing and justifying your approach to merging the two data sets and
dealing with any issues that arise.
For task 3, you will receive up to 4 marks for your treatment of missing or otherwise problematic values in the merged data
set, along with appropriate justifications of your decisions.
For task 4, you will receive up to 4 marks for any further data exploration and data cleaning you undertake, along with
appropriate justifications of your approaches.
We will mark you on both the actual data cleaning you have done (as evidenced by comparing your submitted merged data
set to our clean data set) and the justification of the choices you made, as evidenced by your answers provided in Wattle.
Other Aspects
For all textual answers in this assignment, English writing mistakes and typographical errors will attract small penalties.

