Problem statement:
Link |
Time: 1s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2025-04-16
Difficulty:
Disjoint Set Union, Line Sweep, Geometry
I originally set N to 1000, but after seeing some O(N²) solutions get accepted, which I didn't expect, I have increased N to 5000. We should use a more efficient approach to solve the problem, shouldn't we?
Approach 1: Line sweep and set union(Solution Link)
The first step is to identify connected components by merging overlapping villages. Instead of performing pairwise tests, we sort the villages by x-coordinate and maintain an active set of villages that could potentially overlap, processing events based on each village's x₁ (start) and x₂ (end). When a village from the active set overlaps the current village along the y-axis, we merge them using disjoint set union (DSU). By keeping the active set in a binary search tree ordered by y₁ in descending order, we can efficiently query which villages in the set have y₁ < y₂ of the current village in O(log N), narrowing down the search space. Note that the worst-case time complexity is O(N²) when all villages overlap significantly, forcing pairwise interactions at every step; typically, however, it is O(N log N) because most test cases are generated randomly. Once the villages have been divided into connected components, each represented by a rectilinear polygon whose interior angle at each vertex is 90 degrees, we proceed to connect them.
One key observation is that we can simplify these rectilinear polygons into bounding boxes. If two bounding boxes overlap along any axis, there exists a valid straight-line path between the original polygons. Even if two bounding boxes, A and B, overlap along both axes, and another bounding box C overlaps along a single axis, potentially altering the spatial relationship and suggesting an incorrect path A → C instead of A → B, the final solution remains correct because we are not calculating the shortest distance. For example, if A, B and C all overlap along the x-axis, a vertical line can connect all of them. Thus, both (A → C, B → C) and (A → B, B → C) yield the minimum number of line segments. We only need to ensure that two connected bounding boxes are not redundantly connected again, which can be handled by DSU. By leveraging line sweep and DSU again, we can sort the bounding boxes by x-coordinate and sweep from left to right. If the x₁ of the current box < the max(x₂) of previous boxes, they can be connected by a vertical line, provided they were not previously connected. The same approach applies to the y-coordinates: sweeping along the y-axis, if the y₁ of the current box < the max(y₂) of previous boxes, they can be connected by a horizontal line, if not already connected. Finally, we determine which villages can be connected by a single line segment, as they are already grouped by DSU. For the remaining components, each of the (number of connected bounding boxes - 1) connections requires 2 line segments. Because of sorting, this part runs in O(N log N).
Problem statement:
Link |
Time: 0.5s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2022-02-12
Difficulty:
Ad-hoc, Basic Math, Geometry
Welcome to my first interactive problem! The constraints are not tight; we can actually guess the answer in just three tries.
Approach 1: Elementary algebra (Solution Link)
In this problem, 10⁹ by 10⁹ units is the size of a quadrant. To make our answer relative to the origin (0, 0), we should start from the two rightmost coordinates (10⁹, 10⁹) and (10⁹, -10⁹). Let D1 and D2 be the Manhattan distances from these coordinates to our target T, where D1 + D2 ≥ 10⁹ * 2. By calculating D1 + D2 - 10⁹ * 2, we know the x-axis distance from the rightmost boundary to the x-coordinate of T; halving that value and subtracting it from 10⁹ gives the x-coordinate of T. Hence, X = 10⁹ - (D1 + D2 - 10⁹ * 2) / 2 = 10⁹ * 2 - (D1 + D2) / 2. Once we know X, we can substitute it into D1 = |10⁹ - X| + |10⁹ - Y|, thus Y = 10⁹ * 2 - X - D1. We are able to determine T in two guesses; the last guess is for receiving the verdict. This is a constant-time solution running in O(1).
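The two formulas can be sanity-checked with a tiny sketch (B stands in for 10⁹; the target below is made up for illustration):

```python
def locate(D1, D2, B=10**9):
    # X from the sum of the two Manhattan distances, Y by substitution.
    X = 2 * B - (D1 + D2) // 2
    Y = 2 * B - X - D1
    return X, Y

# Hypothetical target (3, 5): Manhattan distances to (B, B) and (B, -B).
B = 10**9
tx, ty = 3, 5
D1 = abs(B - tx) + abs(B - ty)
D2 = abs(B - tx) + abs(-B - ty)
print(locate(D1, D2))  # (3, 5)
```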
Approach 2: Line-line intersection (Solution Link)
As above, we guess the upper-right and lower-right corners to obtain D1 and D2. Every coordinate that is D1 units away from (10⁹, 10⁹) is a potential target. Similarly, every coordinate that is D2 units away from (10⁹, -10⁹) is a potential target. Since we are working with Manhattan distance, the potential coordinates arising from D1 and D2 form two straight lines respectively. The intersection point of these two lines is the target. We can simply pick any two points on each of the two lines and find the intersection. Now it becomes a famous problem with a well-known solution using determinants (see the Line-line intersection page on Wikipedia). Although this solution requires a slightly heavier implementation and more computation, the time complexity is still O(1).
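A sketch of the determinant formula using two points per line (the sample lines below are hypothetical, chosen to match the geometry described above):

```python
def intersect(p1, p2, p3, p4):
    # Line 1 passes through p1 and p2; line 2 through p3 and p4.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    det1 = x1 * y2 - y1 * x2
    det2 = x3 * y4 - y3 * x4
    px = (det1 * (x3 - x4) - (x1 - x2) * det2) / den
    py = (det1 * (y3 - y4) - (y1 - y2) * det2) / den
    return px, py

# The line x + y = 8 meets the line x - y = -2 at (3, 5).
print(intersect((0, 8), (8, 0), (0, 2), (2, 4)))  # (3.0, 5.0)
```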
Problem statement:
Link |
Time: 0.8s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2020-05-14
Difficulty:
Dynamic Programming, Combinatorics, String, Prefix & Suffix, Subsequence
By drawing a tree diagram for every possible combination with a dummy root, we will find there are many overlapping subpaths. The performance can be improved by bottom-up dynamic programming.
Approach 1: Dynamic Programming (Solution Link)
For a particular i-th word, let f[j] and f'[j] be the number of possible combinations of making S[0 ... j] using the prefixes from W[0] to W[i - 1], and from W[0] to W[i], respectively. For W[0], f[1 ... k] = 1 where k is the length of the longest common prefix with S. Now consider the following transition states for W[i] where i > 0:
- For a conjunction or adposition, f' remains unchanged
If W[i] is a conjunction or adposition, we can ignore it, hence f'[j] = f[j]; otherwise f'[j] = 0.
- Combine W[i]'s prefixes with S's prefixes
Let PW and PS be the lengths of the current prefixes of W[i] and S respectively, where 1 ≤ PW ≤ min(|W[i]|, |S|) and PW < PS ≤ |S|. We want to satisfy two conditions: first, W[i][0 ... PW - 1] is a suffix of S[0 ... PS - 1]; second, S[0 ... PS - PW - 1] is contiguous considering all preceding words, that is, f[PS - PW - 1] > 0. If both hold, f'[PS] = f'[PS] + f[PS - PW - 1]. Note that if f'[j] = 0 for every j, the result is no longer contiguous, so we can immediately return 0.
- Swap f[j] with f'[j]
To make use of f'[j] when computing the next f'[j], set f[j] = f'[j].
Our final answer is f[|S|]. Everything is bounded by |S|, so the time complexity is O(N * |S|²). Another way to think about the problem: each word forms a set with |W[i]| elements containing all prefixes of W[i] (plus the empty string if it is a conjunction or adposition), where an element's value is its prefix length. We have N sets and a bag with capacity |S|, and we want to select one element from each set such that the bag's value is exactly |S|, while satisfying the constraint that the selected element must be a suffix of the current S. This is a variant of the multiple-choice knapsack problem.
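A stripped-down sketch of the per-word DP: it ignores the conjunction/adposition rule and simply counts the ways to build S from one non-empty prefix per word, in order (half-open indexing, illustrative names):

```python
def count_ways(S, words):
    n = len(S)
    f = [0] * (n + 1)  # f[j] = ways to form S[:j] using words so far
    f[0] = 1
    for w in words:
        g = [0] * (n + 1)  # f'[j] for the current word
        for ps in range(1, n + 1):                    # PS: covered length of S
            for pw in range(1, min(len(w), ps) + 1):  # PW: prefix length of w
                # w[:pw] must be the suffix of S[:ps], and the remainder
                # must already be formable (f[ps - pw] > 0).
                if f[ps - pw] and S[ps - pw:ps] == w[:pw]:
                    g[ps] += f[ps - pw]
        f = g
    return f[n]

print(count_ways("abc", ["ab", "c"]))  # 1
```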
Problem statement:
Link |
Time: 0.4s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2019-11-02
Difficulty:
Dynamic Programming, Shortest Path
We can view the shortest travel time as the shortest weighted path. This is clearly an all-pairs constrained shortest path problem, so the way to go is a variant of the Floyd-Warshall algorithm.
Approach 1: Run Floyd-Warshall algorithm K times (Solution Link)
Given a directed graph G = (V, E) with weight W[i] for E[i] and the second constraint L[i] for V[i], where V denotes cities, E denotes flights and W denotes travel times. Note that one of the constraints in this problem restricts the accumulated incoming flow for each directly connected pair of nodes, so the usual shortest-path property does not necessarily hold: a subpath of a shortest path is not always a shortest path, hence path(u, v) is not always path(u, k) + path(k, v). Instead we find the shortest path by performing K relaxations for each vertex with a condition check on L[i].
Define two distance matrices: D' as the results for length K - 1 and D as the results for length K. We find the shortest path for every (U, V) with lengths from 1 to K incrementally. In each relaxation, for each (U, V), we treat U[i] as an intermediate vertex and test whether the shortest travel time from a particular city T to V[i] can be improved by passing through path(T, U[i], V[i]), subject to the constraint D'[T][U[i]] ≤ L(U[i], V[i]). Hence, if there is a path from T to V[i], D[T][V[i]] = min(D'[T][U[i]] + W[i]). The next relaxation needs to use the current results, thus D' = D. This solution requires K relaxations for each (U, V), and hence runs in O(K * |E| * |V|), around 6002500 iterations in the worst case.
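Dropping the L constraint, the K-relaxation skeleton looks like this (one source shown for brevity; all names are illustrative):

```python
def k_step_shortest(n, edges, src, K):
    # edges: (u, v, w) triples. After round k, dist[v] is the best
    # path from src to v using at most k edges (Bellman-Ford style).
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(K):
        prev = dist[:]  # D': results of the previous length
        for u, v, w in edges:
            if prev[u] + w < dist[v]:
                dist[v] = prev[u] + w
    return dist

print(k_step_shortest(3, [(0, 1, 2), (1, 2, 3), (0, 2, 10)], 0, 2))  # [0, 2, 5]
```

The full solution would add the D'[T][U] ≤ L(U, V) check before each relaxation.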
Problem statement:
Link |
Time: 0.5s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2019-05-29
Difficulty:
Graph, Greedy, Heap, Matching, Binary Search, Depth-first Search
The requirements of this problem are similar to MACHCOOL, except for the duration of tasks and the number of machines. We have many ways to solve this problem.
We are only interested in max(D[i][j]) for a particular i-th task because T[i] must wait for the preceding task to complete before it can proceed. Sort T by start time and end time ascendingly. Let M be the minimum number of machines we need. We want one machine to handle as many tasks as possible, so we must assign T[i] to a machine as early as possible once its start time ≥ the end time of the latest task assigned to that machine. Otherwise, we need a new machine to hold that task. For T[i], we get the earliest end time among T[0] to T[i - 1] using a min-heap. If the start time of T[i] ≥ the heap's root, T[i] can be held by an existing machine, and we continue the check against the heap until T[i] conflicts with it. Finally, the remaining elements of the heap are the end times of tasks for which we need separate machines. Thus, M is the heap size.
To distribute them optimally, the idea is similar to the greedy approach mentioned in MACHCOOL. We distribute the M machines as early as possible by pushing the first M tasks into an empty min-heap. For the remaining N - M tasks, our goal is to maximize the gap between the start time of T[i] and the end time of the latest task assigned to any machine. Note that the heap's root is already the earliest end time among assigned tasks, hence the answer is min(start time of T[i] - heap's root) over the remaining tasks. The overall time complexity is O(N log N).
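The M computation is the classic heap sweep; a minimal sketch with tasks as (start, end) pairs (names illustrative):

```python
import heapq

def min_machines(tasks):
    # tasks: (start, end) pairs; a machine is reusable once start >= its end
    heap = []  # end times of the latest task on each machine
    for start, end in sorted(tasks):
        if heap and heap[0] <= start:
            heapq.heapreplace(heap, end)  # reuse the earliest-free machine
        else:
            heapq.heappush(heap, end)     # conflict: open a new machine
    return len(heap)

print(min_machines([(0, 3), (2, 5), (4, 6)]))  # 2
```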
Approach 2: Line sweep with binary search (Solution Link)
Again, we need to know M, the minimum number of machines needed to handle all tasks. We use the same greedy insight with a line sweep algorithm. Denote f(C) as M given C cooling time for each machine. This time we flatten T into two pairs, (start time, 1) and (start time + max(D[i][j]) + C, -1), and sort them ascendingly. By looping through the N * 2 records, we can recognize whether each is a start time or an end time from the second element of the pair. If the value is 1, we need a new machine to hold that task; otherwise an existing machine is released and can be used by another task. Hence f(C) = max(Σ_{i=0..j} record[i]'s second element), where 0 ≤ j < N * 2, and f(0) gives us the minimum number of machines we need. Let's binary search on C. Let L and R be the search range, where L = 0 and R = 86400. For each median MID, if M ≥ f(MID), it is a valid answer because M machines are enough to handle all the tasks with MID cooling time, so we look for a larger guess. Otherwise, we need to decrease the cooling time. This solution has the same time complexity as before.
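The event sweep f(C) plus the binary search over C might look like this (a sketch with made-up task data; tasks are (start, end) pairs where end already includes max(D[i][j])):

```python
def machines_needed(tasks, cooldown):
    # f(C): flatten into (time, +1/-1) events; since -1 < 1, releases at
    # time t sort before starts at time t, matching "start >= end" reuse.
    events = []
    for start, end in tasks:
        events.append((start, 1))
        events.append((end + cooldown, -1))
    events.sort()
    running = best = 0
    for _, delta in events:
        running += delta
        best = max(best, running)
    return best

def max_cooldown(tasks, M, limit=86400):
    lo, hi, best = 0, limit, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if machines_needed(tasks, mid) <= M:  # M machines still suffice
            best, lo = mid, mid + 1           # try a larger cooldown
        else:
            hi = mid - 1
    return best

print(max_cooldown([(0, 3), (2, 5), (4, 6)], M=2))  # 1
```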
Approach 3: Maximum bipartite matching (Solution Link)
This problem can be modelled as a graph problem. Let G = (V, E) be a directed acyclic graph (DAG), where V are tasks and there is an edge from V[i] to V[j] if and only if V[i]'s end time ≤ V[j]'s start time. In this DAG, all the nodes on one path can be handled by a single machine. To find the minimum number of machines, we minimize the number of paths such that no two paths share a vertex, which is the minimum path cover problem on a DAG. Let A = V and B = V be two disjoint and independent sets, with edges from A to B corresponding to E; this forms a bipartite graph. Finding the maximum matching in a bipartite graph is a well-known problem which can be solved in O(|V|³). The number of unmatched vertices in A is the number of machines we need, because minimum path cover = |V| - maximum matching in the bipartite graph.
To get the maximum cooling time, we again use binary search. The cooling time affects how G looks: the longer the cooling time, the fewer edges and matchings, hence the more machines, and vice versa. This solution has worse performance, though.
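A sketch of Kuhn's augmenting-path matching for the bipartite model (O(|V| * |E|), within the O(|V|³) bound mentioned; the tiny task graph below is made up):

```python
def max_matching(adj, n_right):
    # adj[u] = list of right-side vertices reachable from left vertex u
    match_right = [-1] * n_right

    def augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched
                if match_right[v] == -1 or augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(augment(u, [False] * n_right) for u in range(len(adj)))

# Tasks 0, 1, 2: task 0 can precede 1 or 2, task 1 can precede 2.
matching = max_matching([[1, 2], [2], []], 3)
print(3 - matching)  # minimum path cover = |V| - matching = 1 machine
```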
Problem statement:
Link |
Time: 0.3s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2019-05-19
Difficulty:
Greedy, Binary Search, Ad-hoc
There is a straightforward solution with greedy insight.
Approach 1: Always choose i + M machine (Solution Link)
Let's sort the tasks T by start time in ascending order. To optimize the distribution of M machines, we should not put the same machine on consecutive tasks. Consider this situation: let |T| = 4 and M = 2. If we assign the first machine to {T[0], T[1]} and the second machine to {T[2]}, the answer is T[1] - T[0]. If only {T[1]} is assigned to the second machine, the answer is T[2] - T[0]. Note that the tasks are sorted by start time, hence T[2] ≥ T[1] ≥ T[0]. If we add one more task T[3], clearly min(T[2] - T[0], T[3] - T[1]) ≥ min(T[1] - T[0], T[3] - T[2]). This also holds for arbitrary N > 0. Therefore, the best approach is to distribute the tasks and their intervals evenly across the machines, which prevents the result from being dragged down by one machine's significantly small interval. That's why we assign T[i] to machine i modulo M, and the answer is min(86400, T[i + M] - T[i]) over all i with i + M < N. We reach the solution in O(N log N).
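The whole approach is a sort plus one pass (a sketch; the start times below are made up):

```python
def max_cooldown(starts, M, day=86400):
    s = sorted(starts)
    best = day
    for i in range(len(s) - M):
        # Machine i % M handles s[i], then s[i + M], then s[i + 2M], ...
        best = min(best, s[i + M] - s[i])
    return best

print(max_cooldown([30, 0, 10, 20], M=2))  # 20
```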
Approach 2: Binary search (Solution Link)
The idea is the same as before, based on sorted T, but we use binary search. Let L = 0 and R = 86400 be the search range. During each search, the median MID is our guess. Loop through T to check whether T[i + M] - T[i] ≥ MID for every i. If so, the maximum cooling time is ≥ MID, so we take MID as a candidate answer and increase L to look for a larger one. Otherwise, MID is not a valid answer, so we reduce the range by decreasing R. The time complexity of this approach is still O(N log N).
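The same check wrapped in a binary search (equivalent answer; shown only to mirror the approach):

```python
def max_cooldown_bsearch(starts, M, day=86400):
    s = sorted(starts)

    def feasible(c):
        # Every machine's consecutive tasks must be at least c apart.
        return all(s[i + M] - s[i] >= c for i in range(len(s) - M))

    lo, hi, best = 0, day, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            best, lo = mid, mid + 1  # valid: look for a larger answer
        else:
            hi = mid - 1
    return best

print(max_cooldown_bsearch([30, 0, 10, 20], M=2))  # 20
```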
Problem statement:
Link |
Time: 0.6s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-11-08
Difficulty:
Dynamic Programming, Queue, Stack
This problem can be solved by dynamic programming in O(N). Since we can only move right, we have to collect all the gems sequentially.
Approach 1: Dynamic programming (Solution Link)
Let T and G be the positions of teleporters and gems, respectively. The maximum number of moves is G[|G| - 1], as we can only move one unit at a time. The minimum number of moves depends on the optimal results at the previous unit or the previous three teleporters. Denote f[p] as the minimum number of moves from 0 to S[p]. Initially, f[0] = 0 because we are already at position 0, and for p ≥ 1 in the worst case, f[p] = f[p - 1] + 1 = p. At each position, our first step is to check whether S[p] is "@" (teleporter) or "*" (gem).
Let X and Y point to the current indices of T and G respectively. If S[p] = "*", we simply collect the gem and increase Y by 1. If S[p] = "@", we have to update the minimum number of moves to reach the next 3 teleporters. So, let's update f[T[X + i]], where 1 ≤ i ≤ 3 and X + i < |T|. For each i, we may consider either being transferred from the current teleporter T[X] or not being transferred. Therefore, f[T[X + i]] = min(f[T[X + i]], f[p] + 3). The final answer is f[G[|G| - 1]]. Since at most 3 teleporters ahead can be transferred to, we can also use a double-ended queue (deque) or three pointers to solve this problem based on the same dynamic programming idea.
Problem statement:
Link |
Time: 0.7s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-10-25
Difficulty:
Dynamic Programming, Depth-first Search, Bitmasking, Simulation, Implementation
Trying all possibilities costs O(2ᴺ * N!), so instead we will use different techniques to speed up the solution.
Approach 1: Depth first search with bitmasking and memoization (Solution Link)
Flattening all M * K tiles, we simulate putting different flattened tiles T into the gap G. Define fit(p, T[i]) as whether T[i] can fit in G if we put T[i] at G[p]. A quick failure check is |T[i]| + p > |G|. Otherwise, we check by rotating T[i] clockwise or counterclockwise by 90 degrees. Instead of trying all possible placements, we can achieve the same goal by comparing the turns of two adjacent units of both T[i] and G. The turn function returns the difference between two turns, encoding a direction as a number {'L': 0, 'D': 1, 'R': 2, 'U': 3}. To avoid underflow and overflow, we add 4 and take modulo 4, where 4 is the number of possible turns. The result is always 0 (go straight), 1 (turn left 90 degrees) or 3 (turn right 90 degrees), and never 2 (turn 180 degrees), because that would mean two units overlap. From G's head, we compare turn(T[i][j], T[i][j + 1]) with turn(G[p + j], G[p + j + 1]), where 1 ≤ j < |T[i]| - 1. Similarly, from G's tail, we compare turn(T[i][j], T[i][j - 1]) with turn(G[p + |T[i]| - j], G[p + |T[i]| - j + 1]), where |T[i]| - 1 ≥ j ≥ 1. Note that we ignore the first unit of T[i] as it always fits in G regardless of rotation. Therefore, fit(p, T[i]) is true if turn(T) = turn(G) or turn(T') = turn(G').
So which subset of T, in which permutation, fits the entire G? Let's generate those subsets and permutations by depth-first search (DFS) with bitmasking. Every bit represents a particular T[i]. Define f(mask, p) as whether the subset of T not in mask can be permuted to fit in G starting at G[p]. An unset i-th bit means T[i] is unused; we check whether it fits at G[p] by invoking fit(p, T[i]). If so, we discard it by setting the i-th bit and recursively check the next available tile via f(mask | i-th bit, p + |T[i]|). If not, the current permutation is no longer valid, so we stop exploring the remaining tiles. One observation is that this process involves a lot of overlapping computation because most of the masks being passed are duplicates. Memoizing the result per mask using three states {-1 (unsolved), 0 (unfit), 1 (fit)}, where cache[mask] = 1 when p = |G|, improves execution time. Every submask is a subproblem of the mask, so when cache[mask] = X, cache[any submask of mask] = X, where X ∈ {0, 1}.
With the help of bitmasking and memoization, plus some pruning, we discard all invalid permutations as early as possible to make the recursion tree shorter and speed up our solution. This is also called bitmask dynamic programming.
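A skeleton of the bitmask DFS with memoization, with the turn-based fit(p, T[i]) replaced by a simple length check purely for illustration:

```python
from functools import lru_cache

def can_fill(gap_len, tiles):
    # tiles: tile lengths; the real fit(p, T[i]) would also compare turns
    n = len(tiles)

    @lru_cache(maxsize=None)
    def f(mask, p):
        if p == gap_len:  # the whole gap is covered: cache[mask] = fit
            return True
        for i in range(n):
            if not mask >> i & 1 and p + tiles[i] <= gap_len:
                if f(mask | 1 << i, p + tiles[i]):
                    return True
        return False

    return f(0, 0)

print(can_fill(5, [2, 3, 4]))  # True  (2 + 3)
print(can_fill(5, [2, 4]))     # False
```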
Problem statement:
Link |
Time: 0.4s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-07-26
Difficulty:
Dynamic Programming
Instead of an O(6ᴺ) naive solution, which is to try every combination, an O(N) dynamic programming solution will be described here.
Approach 1: Dynamic programming (Solution Link)
For a particular i-th gift box, consider all 6 possible rotations R[i][j] over the 3 distinct surface pairs {W, D}, {H, D}, {W, H}, where 0 ≤ j < 6. Let f[i][j] be the maximum visible area we can obtain from gift[0] to gift[i] when we place gift[i] as R[i][j]. Consider the following transition states for all j:
- When i = 0, it is just the first gift box's total surface area excluding the bottom:
Hence, f[0][j] = R[0][j].W * R[0][j].D * 2 + R[0][j].H * R[0][j].D * 2 + R[0][j].W * R[0][j].H
- When i ≥ 1, combine R[i] with f[i - 1], giving 36 combinations:
Note that one {H, D} side touches gift[i - 1]; there are three cases when we place R[i][j] centrally, horizontally behind R[i - 1][j']:
- H[i] ≤ H[i - 1] and D[i] ≤ D[i - 1]: the total area of {H, D} is not affected.
- H[i] > H[i - 1] and D[i] > D[i - 1]: it completely wraps R[i - 1][j'].
- H[i] > H[i - 1] or D[i] > D[i - 1], but not both: it partially wraps R[i - 1][j'] horizontally or vertically.
Hence, f[i][j] = max(f[i - 1][j'] + R[i][j].W * R[i][j].D * 2 + R[i][j].W * R[i][j].H + extra area from cases 2 and 3)
To reduce the space complexity, since we only need f[i - 1][j'] to compute f[i][j], there is no need to store states prior to i - 1. The answer is max(f[N - 1][j]). The overall time complexity is O(36 * (N - 1) + 6) = O(N).
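The i = 0 formula can be sanity-checked on a unit cube, where exactly 5 of the 6 faces should be visible when the bottom is a W × H face:

```python
def base_visible_area(w, h, d):
    # f[0][j]: full surface area minus the bottom W*H face
    return w * d * 2 + h * d * 2 + w * h

print(base_visible_area(1, 1, 1))  # 5
```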
Problem statement:
Link |
Time: 1s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-04-13
Difficulty:
Tree, Depth-first Search
Note that in a binary tree, the base of an isosceles triangle is either on the left, right or bottom, and at most two of them can be formed by a node at the same time.
Approach 1: Depth first search (Solution Link)
We treat a particular node[i] as the apex of two equal-length sides and check which two types of triangles it can form by counting DCL[i] and DCR[i], the depths from node[i] to the leftmost leaf and rightmost leaf respectively, as well as DPL[i] and DPR[i], the depths from node[i] to the highest ancestor where the side between node[i] and its parent and the side between that ancestor and its parent differ. Denote dfs(i, DPL[i], DPR[i]) as a post-order DFS function which returns {DCL[i], DCR[i]} and updates the answer on the fly. When node[i] is a leaf, it has no children, so the function returns {1, 1}; otherwise, since post-order traversal starts from the leftmost leaf, its results are always larger than its children's by 1. The DP values passed to the function do not affect the returned results, but they do affect the answer. A node has at most one parent, meaning DPL[i] and DPR[i] cannot coexist: one of them must be 0. The root has no parent, hence we start from dfs(0, 0, 0). Then we have DCL[i] = dfs(node[i]'s left child, DPL[node[i]'s parent] + 1, 0)[0] + 1 and DCR[i] = dfs(node[i]'s right child, 0, DPR[node[i]'s parent] + 1)[1] + 1.
Once we know both DP and DC, the maximum triangle is determined by min(length of the two sides), and it can be split into that many equal triangles. The answer for node[i] is min(DCL[i], DCR[i]) + min(DCL[i], DPR[i]) + min(DCR[i], DPL[i]), and we sum all nodes' answers during the tree traversal in O(N).
An article about this problem was published on GeeksforGeeks in 2022; it includes diagrams to help you understand.
Problem statement:
Link |
Time: 0.45s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-04-11
Difficulty:
Prefix Sum, Basic Math, Binary Search, Simulation, Two pointers, Ad-hoc
Approach 1: Prefix sum and two pointers (Solution Link)
Using the prefix sum SUM, we can calculate the accumulated amount of odors ACCUM at the i-th second, where ACCUM[0] = O[0] and ACCUM[i] = ACCUM[i - 1] + SUM[i]. Once ACCUM[i] ≥ M, we know that at either second i or i + 1, the absolute difference between the current amount of odors and the tolerance is minimized. In case M > ACCUM[N - 1], we can withstand ACCUM[N - 1] over N seconds, plus the extra amount of odors divided by SUM[N - 1].
For the second part, we can use two pointers to find the longest stretch of consecutive available cubicles by maintaining the leftmost and rightmost indices of the current stretch. The optimal position is within the longest stretch so far. Since the chosen position should be as close to the middle of that stretch as possible and as close to the entrance as possible, if there are multiple stretches of the same length, we determine the best position by the distance from i to the entrance. In case S[i] = 'x', we move to the next cubicle because that one is already occupied. This produces an O(|S|²) solution.
Approach 2: Binary search and two pointers (Solution Link)
ACCUM is strictly increasing, so we can use binary search. Let L and R be the search range. We need at least 1 second for odors to be released, hence L = 1. Besides, we can withstand the whole amount of odors over the first N seconds, plus the extra amount of odors divided by SUM[N - 1], hence R = N + max(M - ACCUM[N - 1], 0) / SUM[N - 1] + 1. For every median MID, we calculate the accumulated amount of odors at second MID in O(N) by Σ_{i=0..min(N - 1, MID - 1)} O[i] * (MID - i). If the result < M, then at MID seconds the accumulated amount of odors is less than M and we can withstand more, so we increase L. If the result > M, it is not a valid answer, so we narrow down R. This produces an O(N * logR) solution.
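The check inside the binary search is just the weighted sum above (a sketch; the O array and tolerance M below are made up):

```python
def accumulated(O, t):
    # Odor i first counts at second i + 1 and adds O[i] every second after.
    return sum(O[i] * (t - i) for i in range(min(len(O), t)))

def smallest_t(O, M):
    # First second where the accumulated amount reaches M.
    lo, hi = 1, 10**18
    while lo < hi:
        mid = (lo + hi) // 2
        if accumulated(O, mid) >= M:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(smallest_t([1, 2], 7))  # 3
```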
As for the second part of the problem, we use the same two-pointer approach; please refer to the second part of approach 1.
Problem statement:
Link |
Time: 0.6s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-03-29
Difficulty:
Graph, Breadth-first Search, Shortest Path
Moving from '.' to '.' adds no cost; any other move increases the cost by 1. This becomes a single-source weighted shortest path problem on a grid where each distance is either 0 or 1, hence we can use breadth-first search (BFS).
Approach 1: Breadth first search (Solution Link)
Given a grid graph G, let's traverse G from the source S horizontally and vertically using BFS in O(N * M). Define C[i][j] as the minimum cost of travelling from S to G[i][j]. Initially, C[Sᵢ][Sⱼ] = 0 and all other C[i][j] = -1, meaning that we have not visited any cells yet except the source. On each move to an adjacent valid cell, consider the following conditions:
- Cost stays the same: C[adj_i][adj_j] = C[i][j]
Moving from tunnel to tunnel costs nothing. Update the cost if and only if we have not visited G[adj_i][adj_j] or C[i][j] is cheaper.
- Cost increases: C[adj_i][adj_j] = C[i][j] + 1
Entering or leaving a tunnel costs 1. Update the cost if and only if we have not visited G[adj_i][adj_j] or C[i][j] + 1 is cheaper.
Once we have the cost matrix, the answer is simply printed based on the matrix.
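With 0/1 move costs, the usual trick is a 0-1 BFS on a deque: 0-cost moves go to the front, 1-cost moves to the back. A sketch, assuming '.' marks tunnel cells (the tiny grid is made up):

```python
from collections import deque

def min_costs(grid, src):
    n, m = len(grid), len(grid[0])
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    cost[src[0]][src[1]] = 0
    dq = deque([src])
    while dq:
        i, j = dq.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m:
                # tunnel -> tunnel is free; anything else costs 1
                w = 0 if grid[i][j] == "." and grid[a][b] == "." else 1
                if cost[i][j] + w < cost[a][b]:
                    cost[a][b] = cost[i][j] + w
                    (dq.appendleft if w == 0 else dq.append)((a, b))
    return cost

print(min_costs(["..", ".#"], (0, 0)))  # [[0, 0], [0, 1]]
```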
Problem statement:
Link |
Time: 0.04s per test file |
Source: 5000B |
Memory: 1536MB |
Languages: ALL
Added on 2018-03-27
Difficulty:
Greedy, Divide & Conquer, Prefix Sum, Brute Force
An O(N²) solution is simple but inefficient; it can be improved by a divide and conquer approach.
Approach 1: Range sum for each dimension (Solution Link)
Consider each dimension X, Y, Z independently. Let X' be the sorted X, where X'[i] is a pair (sorted X[i], the original index in X). For a particular i-th point, we calculate the sum of distances from the westernmost point to all other points to the west of X'[i] using the prefix sum SUM. Note that the results include overlapping distances: for the points to the east of X'[i], (X'[i] - X'[0]) * (N - 1 - i) is included wrongly, and for the points to the west of X'[i], SUM[i] is also included wrongly. When calculating the sum of distances from X'[i] to all other points, we need to eliminate these overlapping amounts.
Back to X[i]: since we know the original index of X'[i] from its second element, we can update the answer at the corresponding index of our result array DIST. Repeat the same procedure for the Y and Z dimensions, and the coordinates that minimize DISTX[i] + DISTY[i] + DISTZ[i] are the answer. The time complexity of this solution is O(N log N).
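The per-dimension computation (sort, prefix sums, then correct the overlap on both sides) reduces to the classic "sum of distances to all other points" pattern. A sketch for one dimension (names illustrative):

```python
def dist_sums(coords):
    n = len(coords)
    order = sorted(range(n), key=lambda i: coords[i])  # original indices
    vals = sorted(coords)
    prefix = [0]
    for v in vals:
        prefix.append(prefix[-1] + v)
    result = [0] * n
    for rank, idx in enumerate(order):
        # total distance to the points west and east of vals[rank]
        west = vals[rank] * rank - prefix[rank]
        east = (prefix[n] - prefix[rank + 1]) - vals[rank] * (n - 1 - rank)
        result[idx] = west + east  # written back at the original index
    return result

print(dist_sums([1, 3, 6]))  # [7, 5, 8]
```

Running it once per dimension and summing the three result arrays gives DIST.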