I am attempting to convert every file with the .csv extension in a given directory to JSON with this Python script.
Is there a better or more efficient way to do this?
Here is my code:
import csv
import json
import glob
import os
for filename in glob.glob('//path/to/file/*.csv'):
    csvfile = os.path.splitext(filename)[0]
    jsonfile = csvfile + '.json'

    with open(csvfile + '.csv') as f:
        reader = csv.DictReader(f)
        rows = list(reader)

    with open(jsonfile, 'w') as f:
        json.dump(rows, f)
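For comparison, here is a pathlib-based variant I sketched out (the directory path is the same placeholder as above). I'm not sure whether it is actually any better or faster, so please treat it only as a rough sketch of the alternative I was considering:

from pathlib import Path
import csv
import json

# Same logic as above, but with pathlib handling the path manipulation:
# with_suffix() swaps the .csv extension for .json on the same path.
for csv_path in Path('/path/to/file').glob('*.csv'):
    json_path = csv_path.with_suffix('.json')

    with csv_path.open(newline='') as f:
        rows = list(csv.DictReader(f))

    with json_path.open('w') as f:
        json.dump(rows, f)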
EDIT:
Here is the sample input (a CSV file with a header row matching the keys shown in the output below):

username,user_id,firstname,middlename,lastname,age,usertype,email
lanky,4,Joan,Agetha,Lanke,36,admin,[email protected]
masp,56,Mark,Patrick,Aspir,25,member,[email protected]
Here is the sample output:
[{
    "username": "lanky",
    "user_id": "4",
    "firstname": "Joan",
    "middlename": "Agetha",
    "lastname": "Lanke",
    "age": "36",
    "usertype": "admin",
    "email": "[email protected]"
}, {
    "username": "masp",
    "user_id": "56",
    "firstname": "Mark",
    "middlename": "Patrick",
    "lastname": "Aspir",
    "age": "25",
    "usertype": "member",
    "email": "[email protected]"
}]