v_maria

ChatGPT code be like


MurdoMaclachlan

*Image Transcription: Code*

---

    def read_csv_file(file_name):
        data = []
        # Read text file
        file = open(file_name, "r")
        line = file.readline()
        while line:
            item = line.replace("]", "").replace("[", "").split(" ")
            Time = float(item[0])
            Px = float(item[1])
            Py = float(item[2])
            Pz = float(item[3])
            C1 = float(item[4])
            C2 = float(item[5])
            C3 = float(item[6])
            array_point = np.array([Time, Px, Py, Pz, C1, C2, C3])
            data.append(array_point)
            line = file.readline()
        file.close()
        return data

---

I'm a human volunteer content transcriber and you could be too! [If you'd like more information on what we do and why we do it, click here!](https://www.reddit.com/r/TranscribersOfReddit/wiki/index)


ALapsedPacifist

Shorter, and more Pythonic:

    def read_csv_file(file_name):
        data = []
        with open(file_name, 'r') as file:
            for line in file:
                items = line.replace(']', '').replace('[', '').split(' ')
                array_point = np.array([float(x) for x in items])
                data.append(array_point)
        return data


CallMeAnanda

I would probably do this with pandas `read_csv`. You even get to keep the column labels.
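Roughly like this, as a sketch only (the file name is a placeholder, and the column names just mirror the variables in the transcribed code); a converter can strip the stray brackets along the way:

    import pandas as pd

    cols = ["time", "px", "py", "pz", "c1", "c2", "c3"]
    df = pd.read_csv(
        "points.txt",                                           # placeholder file name
        sep=" ",
        header=None,                                            # the file has no header row
        names=cols,                                             # this is where the labels come from
        converters={c: lambda s: float(s.strip("[]")) for c in cols},  # drop '[' / ']' and cast
    )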


ALapsedPacifist

I haven't had cause to use pandas much, so I don't really know what's in it. And anyway, OP clearly needs to be made aware that text files are iterable by line.
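For anyone following along, that line-by-line iteration is just this (file name is a placeholder):

    with open("points.txt") as file:    # placeholder file name
        for line in file:               # the file object yields one line at a time
            print(line.rstrip("\n"))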


Historica97

Or better, instead of iterating over the file object directly:

    def read_csv_file(file_name):
        data = []
        with open(file_name, "r") as file:
            lines = file.readlines()
            for line in lines:
                items = line.replace("]", "").replace("[", "").rstrip("\n").split(" ")
                array_point = np.array([float(x) for x in items])
                data.append(array_point)
        return data


ALapsedPacifist

    def read_csv_file(file_name):
        with open(file_name, 'r') as file:
            line_items = file.read().replace(']', '').replace('[', '').splitlines()
        return [np.array([float(x) for x in li.split(' ')]) for li in line_items]


Lexus4tw

Isn’t polars recommended lately since it’s faster?
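Something like this, maybe (a sketch only: the parameter names assume a recent polars release, and the file and column names are the same placeholders as above):

    import polars as pl

    df = pl.read_csv(
        "points.txt",                   # placeholder file name
        separator=" ",
        has_header=False,
        new_columns=["time", "px", "py", "pz", "c1", "c2", "c3"],
    )
    # the stray '[' and ']' would still need to be stripped separately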


Historica97

Yes, pandas is actually the way to go. Without the brackets, we would have something like this:

    import pandas as pd

    df = pd.read_csv(
        filename,
        names=["time", "px", "py", "pz", "c1", "c2", "c3"],
        sep=" ",
    )


Suhas44

What’s bad about this code?


k0rvbert

A bunch of stuff, but most egregious is the lack of a context manager for the open file (i.e. using `with open(path) as file`) and the uncanny do-while-like construction instead of just `for line in file`. Also, if you're already using numpy you might as well try np.loadtxt or np.genfromtxt; they let you set a delimiter for the space and a converter for the `][`. ... and I was gonna say shadowing the builtin name `file`, but it turns out that's been gone since Python 3. Well, that's a relief!
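A rough sketch of the loadtxt route (the file name is a placeholder, and it assumes a numpy recent enough, 1.23+, that loadtxt passes strings to converters):

    import numpy as np

    strip = lambda s: float(s.strip("[]"))         # drop the stray brackets, then cast
    data = np.loadtxt(
        "points.txt",                              # placeholder file name
        delimiter=" ",
        converters={i: strip for i in range(7)},   # one converter per column
    )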


raam86

The file is explicitly closed; not using a context manager is the least of my worries here.


k0rvbert

Well, sure, but you'll want a context manager in case of an exception. Consider a ValueError raised from the float cast due to bad input data: the file would remain open, whereas the `open` context manager closes it before the exception propagates.
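A minimal illustration of that point (the file name is made up, and the ValueError comes from an assumed junk line in the data):

    # Without a context manager, the failing cast skips file.close():
    file = open("points.txt")
    value = float(file.readline())   # ValueError here leaves the handle open
    file.close()

    # With one, the file is closed before the exception propagates:
    with open("points.txt") as file:
        value = float(file.readline())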


raam86

that’s good to know


v_maria

Does not make good use of the language


Sanduhr32

A CSV with spaces as the delimiter is weird, for example.


Historica97

* The lack of a context manager to open the file
* Like u/k0rvbert said, the `do-while`-like construction is not the clearest to understand
* The CSV clearly doesn't contain any column names, which makes it hard to understand the different types of data
* Using `pandas` would have been easier and probably more efficient than a homemade function


XordK

Love the comment `# Read text file` followed by `file = open()` and `line = file.readline()`


j0hn_mc_clane

What program did you use to create the image, if I may ask?


TheArturro

I'd love to know too.


j0hn_mc_clane

I found this: https://carbon.now.sh/


TheArturro

That's awesome!


Historica97

That's exactly what I used!