Day 25: Snowverload
Megathread guidelines
- Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
- You can post code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL
FAQ
- What is this?: Here is a post with more details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
Scala3
all done!
```scala
// Parse "node: nbr1 nbr2 ..." into undirected edges (scala-graph)
def parseLine(a: String): List[UnDiEdge[String]] = a match
  case s"$n: $r" => r.split(" ").map(_ ~ n).toList
  case _         => List()

// Remove the edges of a shortest path between the two given nodes
def removeShortestPath(g: Graph[String, UnDiEdge[String]], ns: List[String]) =
  g.removedAll(g.get(ns(0)).shortestPathTo(g.get(ns(1))).map(_.edges.map(_.outer)).getOrElse(List()))

// Remove three successive shortest paths between the same pair of nodes
def removeTriple(g: Graph[String, UnDiEdge[String]], ns: List[String]) =
  List.fill(3)(ns).foldLeft(g)(removeShortestPath)

// If the graph now has exactly two components, return the product of their sizes
def division(g: Graph[String, UnDiEdge[String]]): Option[Long] =
  val t = g.componentTraverser()
  Option.when(t.size == 2)(t.map(_.nodes.size).product)

// Try every pair of nodes until removing three shortest paths splits the graph
def task1(a: List[String]): Long =
  val g = Graph() ++ a.flatMap(parseLine)
  g.nodes.toList.combinations(2).map(a => removeTriple(g, a.map(_.outer))).flatMap(division).take(1).toList.head
```
Impressive!
Python (Not C…)
(Posting from an alt because SDF is having issues)
A graph problem 😬. There are well-known algorithms but none trivial to implement in C.
I tried dividing the nodes into two groups and then moving the node with the worst in-group/out-group edge balance - a Wish version of the Kernighan-Lin algorithm. As expected, it got stuck in a local optimum for the real input and random prodding didn’t help.
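For context, that balance-based move looks roughly like this (a minimal Python sketch of the heuristic, not the actual C attempt; `greedy_split` and `adj` are made-up names, with `adj` mapping each node to its set of neighbours):

```python
# Hypothetical sketch of the greedy balance heuristic described above,
# not the original C code. `adj` maps each node to a set of neighbours.
def greedy_split(adj):
    nodes = list(adj)
    a = set(nodes[: len(nodes) // 2])  # arbitrary initial halves
    b = set(nodes[len(nodes) // 2 :])

    def balance(n):
        # in-group edges minus out-group edges for node n
        own = a if n in a else b
        return sum(1 if m in own else -1 for m in adj[n])

    while True:
        worst = min(nodes, key=balance)
        if balance(worst) >= 0:
            return a, b  # no single move helps any more
        # move the worst-balanced node to the other group
        if worst in a:
            a.remove(worst)
            b.add(worst)
        else:
            b.remove(worst)
            a.add(worst)
```

It stops as soon as no single move looks like an improvement, which is exactly the local optimum mentioned above.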
Programming is programming and libraries are fair game, so I went to Python and used NetworkX. It was my first time using the library, but it was so easy to use! Maybe I'll go back and do it in C someday and implement the algorithm myself.
```python
#!/usr/bin/env python3
import sys
import re

from networkx import Graph
from networkx.algorithms.community import greedy_modularity_communities

G = Graph()

for line in sys.stdin.readlines():
    words = re.findall("\\w+", line)
    for to in words[1:]:
        G.add_edge(words[0], to)

coms = greedy_modularity_communities(G, best_n=2)
print("25:", len(coms[0]) * len(coms[1]))
```
https://github.com/sjmulder/aoc/tree/master/2023/python/day25.py
Rust
I first tried a really slow brute force, but after waiting many minutes I heard others talking about Karger's algorithm, so I implemented that.
```rust
use rand::prelude::*;
use std::collections::HashSet;

type Graph = (V, E);
type V = HashSet<String>;
type E = Vec<(String, String)>;

fn parse_input(input: &str) -> Graph {
    let mut vertices = HashSet::new();
    let mut edges = Vec::new();
    for line in input.trim().lines() {
        let mut it = line.split(':');
        let left = it.next().unwrap();
        vertices.insert(left.to_owned());
        for right in it.next().unwrap().trim().split_whitespace() {
            vertices.insert(right.to_owned());
            edges.push((left.to_owned(), right.to_owned()));
        }
    }
    (vertices, edges)
}

fn part1(input: &str) -> usize {
    let (vertices, edges) = parse_input(input);
    let mut rng = rand::thread_rng();

    // Karger's Algorithm
    loop {
        let mut vertices = vertices.clone();
        let mut edges = edges.clone();

        while vertices.len() > 2 {
            let i = rng.gen_range(0..edges.len());
            let (v1, v2) = edges[i].clone();

            // contract the edge
            edges.swap_remove(i);
            vertices.remove(&v1);
            vertices.remove(&v2);
            let new_v = format!("{}:{}", v1, v2);
            vertices.insert(new_v.clone());
            for (e1, e2) in edges.iter_mut() {
                if *e1 == v1 || *e1 == v2 {
                    *e1 = new_v.clone()
                }
                if *e2 == v1 || *e2 == v2 {
                    *e2 = new_v.clone()
                }
            }

            // remove loops
            let mut j = 0;
            while j < edges.len() {
                let (e1, e2) = &edges[j];
                if e1 == e2 {
                    edges.swap_remove(j);
                } else {
                    j += 1;
                }
            }
        }

        if edges.len() == 3 {
            break vertices
                .iter()
                .map(|s| s.split(':').count())
                .product::<usize>();
        }
    }
}
```
Same. I am happy that it was a straightforward algorithm.