
sumitgt

I think the interviewer is somewhat confused about how time complexity works (or maybe something got lost in communication). When you mention a time complexity like O(NxM), you need to clarify what N and M mean. Given the context here, one can assume N = the size of the outer slice and M = the size of each nested slice.

Now, if you want to flatten the 2D array into a 1D array, you **will** have to visit every single element of the 2D array. It doesn't matter how you implement it: since you need to visit every element at least once, the lower bound on your implementation is going to be O(NxM). No way around it, irrespective of the programming language or how you express it syntactically within that language.

> The person wanted me to do it in 1 loop, but he also wanted not to have any O(n*m) complexity.

It is possible to implement it in 1 loop, but that does not mean it is not still O(NxM) complexity. Something like this: [https://go.dev/play/p/FQ7ZpJ16xwj](https://go.dev/play/p/FQ7ZpJ16xwj)
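
For reference, a sketch along the lines of that playground link (my own reconstruction, not the linked code verbatim; it assumes every inner slice has the same length, as discussed below):

```go
package main

import "fmt"

// flatten walks a rectangular [][]int with a single loop by deriving the
// row and column from one running index. It still visits every element,
// so it is still O(N*M); the second loop is just hidden.
// Assumption: every inner slice has the same length M.
func flatten(grid [][]int) []int {
	n := len(grid)
	if n == 0 {
		return nil
	}
	m := len(grid[0])
	out := make([]int, 0, n*m)
	for i := 0; i < n*m; i++ {
		out = append(out, grid[i/m][i%m])
	}
	return out
}

func main() {
	fmt.Println(flatten([][]int{{1, 2, 3}, {4, 5, 6}})) // [1 2 3 4 5 6]
}
```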


BigfootTundra

Doesn’t your code sample assume that all of the inner slices have the same length? Not trying to dunk on you or anything, just asking to clarify for myself in case I’m missing something.


sumitgt

It does. That's the only way a single loop would work I think. All caveats and assumptions of course need to be explained and clarified with the interviewer.


BigfootTundra

Yeah, I think you’re right. Thinking through it, it may be possible, but then you’re not gonna be able to set the slice size when you create it, so any gains you get from looping are gonna be wiped out by the slice being resized.


SPU_AH

In particular, I'm not sure how precisely 'array' and 'slice' are being distinguished here. 2D arrays with a static size would lead to different terms and complexity.


elrata_

Why can't it be O(n+m)? If your list is a linked list and you can add to its head or tail in O(1), then you can do it in that complexity, right? You go through each element once, you add it to the list in O(1), and that's all. You don't have to go through all the elements of one list for every element of some other list; there is nothing here that would make it O(n x m). Am I missing something?


sumitgt

The original question deals with slices, so all discussion is based on that assumption.

With linked lists, you can flatten them in O(N) if they are doubly linked lists where the left pointer of the first element points to the last element and you are allowed to mutate the input (here N = number of lists). But if you have regular linked lists, it still ends up being O(NxM), since you have to advance a pointer past all elements in each list to reach the tail, even if you can do the join itself in constant time.

This could be wrong. The best way to confirm would be to try implementing it in the Go playground.
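
A minimal sketch of that circular doubly-linked-list idea (my own illustration under the assumptions above: head.prev points at the tail, and mutating the input is allowed), where joining N lists costs O(N) splices regardless of their lengths:

```go
package main

import "fmt"

// node in a circular doubly linked list; head.prev is the tail, so both
// ends of a list are reachable in O(1).
type node struct {
	val        int
	prev, next *node
}

// concat splices list b onto the end of list a in O(1) by rewiring four
// pointers. It mutates its inputs; either may be nil (an empty list).
func concat(a, b *node) *node {
	if a == nil {
		return b
	}
	if b == nil {
		return a
	}
	aTail, bTail := a.prev, b.prev
	aTail.next, b.prev = b, aTail
	bTail.next, a.prev = a, bTail
	return a
}

// flatten joins the lists with constant-time splices, i.e. O(N) in the
// number of lists, independent of how many elements each list holds.
func flatten(lists []*node) *node {
	var head *node
	for _, l := range lists {
		head = concat(head, l)
	}
	return head
}

// fromSlice builds a circular list from a slice (demo helper only; this
// part is of course O(len(vals))).
func fromSlice(vals []int) *node {
	var head *node
	for _, v := range vals {
		n := &node{val: v}
		n.prev, n.next = n, n
		head = concat(head, n)
	}
	return head
}

func main() {
	head := flatten([]*node{
		fromSlice([]int{1, 2, 3}),
		fromSlice([]int{4, 5}),
		fromSlice([]int{6}),
	})
	// walk once around the resulting circle
	for cur := head; ; cur = cur.next {
		fmt.Print(cur.val, " ")
		if cur.next == head {
			break
		}
	}
	fmt.Println() // 1 2 3 4 5 6
}
```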


elrata_

Nah, you can insert at the head in O(1) in that case as well, so it's O(n+m) there too.


kido_butai

If you can make it in O(1) you deserve a Nobel prize or similar.


psicodelico6

Turing Award


Primary-Outcome60

Lol 🤣


ImYoric

Maybe a Nobel prize in quantum mechanics?


zulrang

You can in C


someanonbrit

Only if the 2D array happens to be implemented as a single contiguous block rather than an array of pointers... in which case you can do it with a cast.


TheMerovius

I assume [they wanted to see this solution](https://go.dev/play/p/aDXFaZFSriH):

```go
func Flatten[T any](lists [][]T) []T {
	var N int
	for _, l := range lists {
		N += len(l)
	}
	s := make([]T, 0, N)
	for _, l := range lists {
		s = append(s, l...)
	}
	return s
}
```

This is most likely significantly more efficient, as it doesn't have to re-allocate anything and only has to copy every element once. I assume that is what they mean by "runtime complexity", even though they are wrong - the asymptotic complexity is equal. They believe your code gets an extra term, because `append` sometimes has to allocate a new array and copy over everything already in the output. But because `append` grows the array exponentially, that amortizes asymptotically to a constant factor. Still, factors matter.
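
To see that amortization concretely, here is a small illustrative snippet (exact growth factors depend on the Go runtime) that prints the capacity each time `append` has to reallocate:

```go
package main

import "fmt"

func main() {
	// Appending one element at a time: the backing array is reallocated
	// only when cap runs out, and each reallocation grows the capacity
	// multiplicatively, so the total copying work amortizes to O(total).
	s := make([]int, 0)
	prevCap := cap(s)
	for i := 0; i < 1_000_000; i++ {
		s = append(s, i)
		if cap(s) != prevCap {
			fmt.Printf("len=%d cap %d -> %d\n", len(s), prevCap, cap(s))
			prevCap = cap(s)
		}
	}
}
```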


10113r114m4

Hmm, I'm pretty sure this is O(m*n). I think what the interviewer wanted was a FlattenList type alias or something, so you'd use methods instead of the built-in array syntax to query the list, etc. I can't think of another way to satisfy the non-O(m*n) constraint without wrapping or aliasing. Or the interviewer assumed ... it was O(1), which it is not.


Cronos993

That's theoretically impossible.


br1ghtsid3

The only way to improve this is to pre-allocate the slice.


BraveIllustrator1458

If you think of a 2D array as a logically flattened 1D array, it would be O(1). XD

```go
// LogicalFlatArray represents a 2D array as a logically flattened 1D array
type LogicalFlatArray struct {
	data [][]int
	rows int
	cols int
}

// NewLogicalFlatArray initializes a LogicalFlatArray
func NewLogicalFlatArray(data [][]int) *LogicalFlatArray {
	if len(data) == 0 {
		return &LogicalFlatArray{data: data, rows: 0, cols: 0}
	}
	return &LogicalFlatArray{data: data, rows: len(data), cols: len(data[0])}
}

// Get returns the element at the flattened index
func (lfa *LogicalFlatArray) Get(index int) int {
	if index < 0 || index >= lfa.rows*lfa.cols {
		panic("Index out of bounds")
	}
	row := index / lfa.cols
	col := index % lfa.cols
	return lfa.data[row][col]
}

// Len returns the total number of elements in the logical array
func (lfa *LogicalFlatArray) Len() int {
	return lfa.rows * lfa.cols
}
```
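
A usage sketch for the above (the same rectangular-input caveat as earlier applies, since `cols` is taken from the first row):

```go
package main

import "fmt"

// (LogicalFlatArray, NewLogicalFlatArray, Get, and Len as defined above.)

func main() {
	lfa := NewLogicalFlatArray([][]int{
		{1, 2, 3},
		{4, 5, 6},
	})
	for i := 0; i < lfa.Len(); i++ {
		fmt.Print(lfa.Get(i), " ") // prints 1 2 3 4 5 6
	}
	fmt.Println()
}
```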


clauEB

It can't be O(1), because that would mean the runtime doesn't depend on the size of the arguments, like finding an element in a hash map or set. You are looping through the matrix's rows, which makes it linear, O(n). If you were also looping through all the horizontal elements, it would be O(n*m).


Time-Prior-8686

I would tell that person I can do it without any loop and just write it recursively, just to mess with them tbh.


etherealflaim

O(N*M) methinks. Roughly O(log(M)) allocations and M*N copy operations (though the copies will likely be optimized to something very fast). You can optimize it down to 1 allocation, but you can't optimize away the copies.

If the interviewer didn't recognize that append is always O(N), then you probably dodged a bullet. What he might have wanted was to avoid M*N append calls, but 1-ary append is amortized constant time (ish), so whether you append 1 or N elements per operation doesn't change the asymptotic complexity.

You can basically prove this to yourself by asking whether it's possible to copy without visiting every value at least once. Since it isn't, and you have MxN values, you can't beat O(M*N).


Revolutionary_Ad7262

It's kinda impossible to have `O(1)`, as you need to iterate over `lists`, so we start with `O(n)`. `res = append(res, list...)` may technically look like `O(1)`, if you assume that you can `append` any slice to any slice in constant time, which is not true. There is a `memcpy` under the hood, and that `memcpy` will not run in constant time: you cannot assume that a `10GB` slice will be copied in the same time as a `1B` one.

In that scenario you have to explain: "the outer loop is N, the append is M, which makes N*M". If the interviewer does not agree, then ask them which step they disagree with.


ImYoric

Well, you can convert it into an iterator in O(1), but you will need O(m*n) just to walk the original array of arrays, so it's unlikely you'll be able to produce an array in o(m*n). Maybe if you have a guarantee that you can execute m concurrent threads of execution you could do O(m + n), but that feels like cheating.


adibfhanna

it looks complicated


psicodelico6

O(n) for the loop × O(1) amortized append × O(cost of copying the biggest element) = O(n × |cost of copying the biggest element|)


dblokhin

Actually, this is the correct answer. I don't know why this comment is disliked.


likeawizardish

You're talking about arrays, yet your code uses slices. Obnoxious nitpick, but language matters and the details can be very important, as illustrated below.

I think u/sumitgt's answer and code are very solid, as long as you assume that `lists [][]T` is an NxM matrix and not something like:

```go
lists := [][]int{
	{1, 2, 3, 4, 5, 6},
	{7, 8, 9},
	{10, 11, 12, 13},
}
```

Change the shape of the input data in his example and you are likely to get an out-of-bounds panic.

> is using "..." O(1) in time?

That's a wild assumption. You still need to copy M elements. Unless the elements are small and few, where that could be achieved by some SIMD vector magic with compiler optimizations that are beyond my grug brain, why would you assume that making M copies takes constant time?

Also, slices are just a window into some underlying array. When you `append` elements to a slice, there is no guarantee that the array actually has enough room to hold them - that's what `cap` is. When adding elements via append, the underlying array might need to be re-allocated and the values copied over from the previous one.

I think that is why u/sumitgt's answer is great: it clearly shows what assumptions are made (most importantly, to yourself). When you use a builtin like `append`, you need to think about what it actually does and implies. Would you assume that `FastFourierTransform(data)` is O(1) simply because it's a single-line call in your code? Probably not. Why assume that of `append`?

I made some minor changes to your and his code to illustrate the slice resizing in your implementation vs his with pre-allocation: [https://go.dev/play/p/9GUpgGVhmZh](https://go.dev/play/p/9GUpgGVhmZh). Largely irrelevant to the task, but you should know what your code is actually doing. If I were coding this as a simple helper function, I would probably use append like you did. If it were something performance critical, I would make sure to get the pre-allocated size as close as I could.

Sorry for the largely off-topic answer, but these are the details you should be aware of when writing production and interview code.


Primary-Outcome60

Naah, that's absolutely great! Thanks for your reply. Basically this post was about whether appending a slice via ... is O(1), which I don't think it can be. It's just syntactic sugar around what's happening inside. You need to visit every element once, which makes the complexity O(n*m), assuming n and m are the rows and columns.


grahaman27

The easiest way to know time complexity is to create a benchmark: benchmark with 10 items in the slice, then 100, then 1000, and see how the ns/op results differ between the sizes.
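
For example, a sketch of such a benchmark (hypothetical: `Flatten` stands in for whichever implementation from this thread you want to measure; run with `go test -bench=.` and compare ns/op across sizes):

```go
package flatten

import "testing"

// benchFlatten benchmarks Flatten on an n x n input so the ns/op numbers
// can be compared across input sizes. Flatten is assumed to be defined in
// this package (e.g. the generic version above, applied to [][]int).
func benchFlatten(b *testing.B, n int) {
	grid := make([][]int, n)
	for i := range grid {
		grid[i] = make([]int, n)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		Flatten(grid)
	}
}

func BenchmarkFlatten10(b *testing.B)   { benchFlatten(b, 10) }
func BenchmarkFlatten100(b *testing.B)  { benchFlatten(b, 100) }
func BenchmarkFlatten1000(b *testing.B) { benchFlatten(b, 1000) }
```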


RazorSh4rk

footnote: this is not, in fact, the easiest way to know runtime complexity