Set cover algorithms tend to provide just one solution for finding a minimum number of sets to cover. How to go about finding all such solutions?
It depends on what you mean by "minimal", since that changes which set covers you get. For example, with the target set ABC and the sets AB, AC, C to choose from, you could cover it with (AB, C), with (AB, AC), or with all three (AB, AC, C). If you define "minimal" as, say, the covers with the fewest overlaps or fewest repeated elements, then you would keep the first two, (AB, C) and (AB, AC). "Minimal" could also be defined by the number of sets chosen; for the example above the lowest count is 2, and either (AB, C) or (AB, AC) would work. But if you want all possible set covers, you can start with brute force and simply try every combination:
import java.util.*;

public class f {
    // abcd
    static int target = 0b1111;
    // ab, ac, acd, cd
    static int[] groups = {0b1100, 0b1010, 0b1011, 0b0011};

    // check if the chosen sets cover the target
    // for example 1100 and 0011 would cover 1111
    static boolean covers(boolean[] A) {
        int or = 0;
        for (int i = 0; i < A.length; i++) {
            if (A[i]) or = or | groups[i];
        }
        int t = target;
        while (t > 0) {
            if (t % 2 == 1 && or % 2 != 1) // a required element is missing
                return false;
            t = t >> 1;
            or = or >> 1;
        }
        return true;
    }

    // go through all 2^n choices of including/excluding each group
    static void combos(boolean A[], int i, int j) {
        if (i > j) {
            if (covers(A)) System.out.println(Arrays.toString(A));
            return;
        }
        combos(A, i + 1, j);
        A[i] = !A[i];
        combos(A, i + 1, j);
    }

    public static void main(String args[]) {
        boolean[] A = new boolean[groups.length];
        combos(A, 0, groups.length - 1);
    }
}
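If by "minimal" you mean the covers that use the fewest sets, one small variation on the same brute force is to record the best size seen so far and keep only covers of that size. The sketch below is my own illustration of that variant (the class name MinimalCovers, the bitmask enumeration and the minCovers list are not part of the answer above):

import java.util.*;

public class MinimalCovers {
    static int target = 0b1111;                              // abcd
    static int[] groups = {0b1100, 0b1010, 0b1011, 0b0011};  // ab, ac, acd, cd

    public static void main(String[] args) {
        int n = groups.length;
        int best = Integer.MAX_VALUE;
        List<String> minCovers = new ArrayList<>();
        // enumerate all 2^n subsets with a bitmask instead of recursion
        for (int mask = 0; mask < (1 << n); mask++) {
            int or = 0, size = 0;
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) { or |= groups[i]; size++; }
            }
            if ((or & target) != target) continue;           // not a cover
            if (size < best) { best = size; minCovers.clear(); }
            if (size == best) minCovers.add(Integer.toBinaryString(mask)); // bit i = group i included
        }
        System.out.println("covers of minimum size " + best + ": " + minCovers);
    }
}

For the groups above, that finds the two 2-set covers ab+cd and ab+acd.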
I want to create a function to determine the maximum number of pieces of paper that fit on a parent paper size.
The formula above is still not optimal; using it produces at most 32 cuts per sheet.
I want it like below.
This seems to be a very difficult problem to solve optimally. See http://lagrange.ime.usp.br/~lobato/packing/ for a discussion of a 2008 paper claiming that the problem is believed (but not proven) to be NP-hard. The researchers found some approximation algorithms and implemented them on that website.
The following solution uses Top-Down Dynamic Programming to find optimal solutions to this problem. I am providing this solution in C#, which shouldn't be too hard to convert into the language of your choice (or whatever style of pseudocode you prefer). I have tested this solution on your specific example and it completes in less than a second (I'm not sure how much less than a second).
It should be noted that this solution assumes that only guillotine cuts are allowed. This is a common restriction for real-world 2D Stock-Cutting applications and it greatly simplifies the solution complexity. However, CS, Math and other programming problems often allow all types of cutting, so in that case this solution would not necessarily find the optimal solution (but it would still provide a better heuristic answer than your current formula).
First, we need a value-structure to represent the size of the starting stock, the desired rectangle(s) and of the pieces cut from the stock (this needs to be a value-type because it will be used as the key to our memoization cache and other collections, and we need to compare the actual values rather than an object reference address):
public struct Vector2D
{
    public int X;
    public int Y;

    public Vector2D(int x, int y)
    {
        X = x;
        Y = y;
    }
}
Here is the main method to be called. Note that all values need to be integers; for the specific case above this just means multiplying everything by 100. These methods require integers, but are otherwise scale-invariant, so multiplying by 100 or 1000 or whatever won't affect performance (just make sure that the values don't overflow an int).
public int SolveMaxCount1R(Vector2D Parent, Vector2D Item)
{
    // make a list to hold both the item size and its rotation
    List<Vector2D> itemSizes = new List<Vector2D>();
    itemSizes.Add(Item);
    if (Item.X != Item.Y)
    {
        itemSizes.Add(new Vector2D(Item.Y, Item.X));
    }

    int solution = SolveGeneralMaxCount(Parent, itemSizes.ToArray());

    return solution;
}
Here is an example of how you would call this method with your parameter values. In this case I have assumed that all of the solution methods are part of a class called SolverClass:
SolverClass solver = new SolverClass();
int count = solver.SolveMaxCount1R(new Vector2D(2500, 3800), new Vector2D(425, 550));
//(all units are in tenths of a millimeter to make everything integers)
The main method calls a general solver method for this type of problem (that is not restricted to just one size rectangle and its rotation):
public int SolveGeneralMaxCount(Vector2D Parent, Vector2D[] ItemSizes)
{
    // determine the maximum x and y scaling factors using GCDs (Greatest
    // Common Divisor)
    List<int> xValues = new List<int>();
    List<int> yValues = new List<int>();
    foreach (Vector2D size in ItemSizes)
    {
        xValues.Add(size.X);
        yValues.Add(size.Y);
    }
    xValues.Add(Parent.X);
    yValues.Add(Parent.Y);
    int xScale = NaturalNumbers.GCD(xValues);
    int yScale = NaturalNumbers.GCD(yValues);

    // rescale our parameters
    Vector2D parent = new Vector2D(Parent.X / xScale, Parent.Y / yScale);
    var baseShapes = new Dictionary<Vector2D, Vector2D>();
    foreach (var size in ItemSizes)
    {
        var reducedSize = new Vector2D(size.X / xScale, size.Y / yScale);
        baseShapes.Add(reducedSize, reducedSize);
    }

    // determine the minimum values that an allowed item shape can fit into
    _xMin = int.MaxValue;
    _yMin = int.MaxValue;
    foreach (var size in baseShapes.Keys)
    {
        if (size.X < _xMin) _xMin = size.X;
        if (size.Y < _yMin) _yMin = size.Y;
    }

    // create the memoization cache for shapes
    Dictionary<Vector2D, SizeCount> shapesCache = new Dictionary<Vector2D, SizeCount>();

    // find the solution pattern with the most finished items
    int best = solveGMC(shapesCache, baseShapes, parent);

    return best;
}
private int _xMin;
private int _yMin;
The general solution method calls a recursive worker method that does most of the actual work.
private int solveGMC(
    Dictionary<Vector2D, SizeCount> shapeCache,
    Dictionary<Vector2D, Vector2D> baseShapes,
    Vector2D sheet )
{
    // have we already solved this size?
    if (shapeCache.ContainsKey(sheet)) return shapeCache[sheet].ItemCount;

    SizeCount item = new SizeCount(sheet, 0);
    if ((sheet.X < _xMin) || (sheet.Y < _yMin))
    {
        // if it's too small in either dimension then this is a scrap piece
        item.ItemCount = 0;
    }
    else // try every way of cutting this sheet (guillotine cuts only)
    {
        int child0;
        int child1;

        // try every size of horizontal guillotine cut
        for (int c = sheet.X / 2; c > 0; c--)
        {
            child0 = solveGMC(shapeCache, baseShapes, new Vector2D(c, sheet.Y));
            child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X - c, sheet.Y));
            if (child0 + child1 > item.ItemCount)
            {
                item.ItemCount = child0 + child1;
            }
        }

        // try every size of vertical guillotine cut
        for (int c = sheet.Y / 2; c > 0; c--)
        {
            child0 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, c));
            child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, sheet.Y - c));
            if (child0 + child1 > item.ItemCount)
            {
                item.ItemCount = child0 + child1;
            }
        }

        // if no children returned finished items, then the sheet is
        // either scrap or a finished item itself
        if (item.ItemCount == 0)
        {
            if (baseShapes.ContainsKey(item.Size))
            {
                item.ItemCount = 1;
            }
            else
            {
                item.ItemCount = 0;
            }
        }
    }

    // add the item to the cache before we return it
    shapeCache.Add(item.Size, item);
    return item.ItemCount;
}
Finally, the general solution method uses a GCD function to rescale the dimensions to achieve scale-invariance. This is implemented in a static class called NaturalNumbers. I have included the relevant parts of this class below:
static class NaturalNumbers
{
    /// <summary>
    /// Returns the Greatest Common Divisor of two natural numbers.
    /// Returns Zero if either number is Zero,
    /// Returns One if either number is One and both numbers are >Zero
    /// </summary>
    public static int GCD(int a, int b)
    {
        if ((a == 0) || (b == 0)) return 0;
        if (a >= b)
            return gcd_(a, b);
        else
            return gcd_(b, a);
    }

    /// <summary>
    /// Returns the Greatest Common Divisor of a list of natural numbers.
    /// (Note: will run fastest if the list is in ascending order)
    /// </summary>
    public static int GCD(IEnumerable<int> numbers)
    {
        // parameter checks
        if (numbers == null || numbers.Count() == 0) return 0;

        int first = numbers.First();
        if (first <= 1) return 0;

        int g = (int)first;
        if (g <= 1) return g;

        int i = 0;
        foreach (int n in numbers)
        {
            if (i == 0)
                g = n;
            else
                g = GCD(n, g);
            if (g <= 1) return g;
            i++;
        }
        return g;
    }

    // Euclidean method with Euclidean Division,
    // From: https://en.wikipedia.org/wiki/Euclidean_algorithm
    private static int gcd_(int a, int b)
    {
        while (b != 0)
        {
            int t = b;
            b = (a % b);
            a = t;
        }
        return a;
    }
}
Please let me know of any problems or questions you might have with this solution.
Oops, forgot that I was also using this class:
public class SizeCount
{
    public Vector2D Size;
    public int ItemCount;

    public SizeCount(Vector2D itemSize, int itemCount)
    {
        Size = itemSize;
        ItemCount = itemCount;
    }
}
As I mentioned in the comments, it would actually be pretty easy to factor this class out of the code, but it's still in there right now.
I have the following set of integers {2,9,4,1,8}. I need to divide this set into two subsets so that the sums of the subsets are 14 and 10 respectively. In my example the answer is {2,4,8} and {9,1}. I am not looking for any code. I am pretty sure there must be a standard algorithm to solve this problem. Since I was not able to find it by googling, I posted my query here. So what would be the best way to approach this problem?
My try was like this...
import java.util.Stack;

public class Test {

    public static void main(String[] args) {
        int[] input = {2, 9, 4, 1, 8};
        int target = 14;
        Stack<Integer> stack = new Stack<>();
        for (int i = 0; i < input.length; i++) {
            stack.add(input[i]);
            for (int j = i + 1; j < input.length; j++) {
                int sum = sumInStack(stack);
                if (sum < target) {
                    stack.add(input[j]);
                    continue;
                }
                if (target == sum) {
                    System.out.println("Eureka");
                }
                stack.remove(input[i]);
            }
        }
    }

    private static int sumInStack(Stack<Integer> stack) {
        int sum = 0;
        for (Integer integer : stack) {
            sum += integer;
        }
        return sum;
    }
}
I know this approach is not even close to solving the problem.
I need to divide this set into two subsets so that the sum of the sets results in 14 and 10 respectively.
If the subsets have to sum to certain values, then it had better be true that the sum of the entire set is the sum of those values, i.e. 14+10=24 in your example. If you only have to find the two subsets, then the problem isn't very difficult — find any subset that sums to one of those values, and the remaining elements of the set must sum to the other value.
For the example set you gave, {2,9,4,1,8}, you said that the answer is {9,1}, {2,4,8}, but notice that that's not the only answer; there's also {2,8}, {9,4,1}.
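To make that idea concrete, here is a small sketch (mine, not part of the answer) that recursively looks for one subset summing to the first target; whatever is left over is the second subset. The class and method names are made up for illustration:

import java.util.*;

public class PartitionBySum {
    // returns a subset of values[from..] that sums to target, or null if none exists
    static List<Integer> subsetWithSum(int[] values, int from, int target) {
        if (target == 0) return new ArrayList<>();
        if (from == values.length || target < 0) return null;
        // try taking values[from]
        List<Integer> taken = subsetWithSum(values, from + 1, target - values[from]);
        if (taken != null) { taken.add(values[from]); return taken; }
        // otherwise skip it
        return subsetWithSum(values, from + 1, target);
    }

    public static void main(String[] args) {
        int[] input = {2, 9, 4, 1, 8};
        List<Integer> first = subsetWithSum(input, 0, 14);   // one subset summing to 14
        System.out.println("sums to 14: " + first);          // the remaining elements sum to 10
    }
}

Running it on {2,9,4,1,8} with target 14 prints one valid subset; removing those elements from the set leaves a subset summing to 10.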
For example, string "AAABBB" will have permutations:
"ABAABB",
"BBAABA",
"ABABAB",
etc
What's a good algorithm for generating the permutations? (And what's its time complexity?)
For a multiset, you can solve recursively by position (JavaScript code):
function f(multiset, counters, result) {
    if (counters.every(x => x === 0)) {
        console.log(result);
        return;
    }
    for (var i = 0; i < counters.length; i++) {
        if (counters[i] > 0) {
            var _counters = counters.slice();
            _counters[i]--;
            f(multiset, _counters, result + multiset[i]);
        }
    }
}

f(['A','B'], [3,3], '');
This is not a full answer, just an idea.
If your strings have a fixed number of only two letters, I would go with a binary tree and a simple recursive function.
Each node is an object whose name is its parent's name plus a suffix of A or B, and it also stores how many A's and B's are in its name.
The node constructor takes the parent's name and its A and B counts, so it only needs to add 1 to the count of A or B and one letter to the name.
It doesn't construct the next node if there are already more than three A's (in the case of an A node) or B's respectively, or if the total length equals the length of the starting string.
Now you can collect the leaves of the two trees (their names) and you have all the permutations you need.
Scala or some functional language (with object-like features) would be perfect for implementing this algorithm. Hope this helps or at least sparks some ideas.
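A minimal sketch of that idea in Java (the answer suggests Scala; the names below are made up): each recursive call appends an 'A' or a 'B' while that letter's budget remains, which is exactly the two-branch tree described above, and the completed strings are the leaves.

public class TwoLetterPermutations {
    // prefix is the node "name"; remainingA / remainingB are the letter budgets
    static void build(String prefix, int remainingA, int remainingB) {
        if (remainingA == 0 && remainingB == 0) {
            System.out.println(prefix);          // a leaf: one complete permutation
            return;
        }
        if (remainingA > 0) build(prefix + 'A', remainingA - 1, remainingB);
        if (remainingB > 0) build(prefix + 'B', remainingA, remainingB - 1);
    }

    public static void main(String[] args) {
        build("", 3, 3);   // all distinct permutations of "AAABBB"
    }
}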
Since you actually want to generate the permutations instead of just counting them, the best complexity you can hope for is O(size_of_output).
Here's a good solution in Java that meets that bound and runs very quickly, while consuming negligible space. It first sorts the letters to find the lexicographically smallest permutation, and then generates all permutations in lexicographic order.
It's known as the Pandita algorithm: https://en.wikipedia.org/wiki/Permutation#Generation_in_lexicographic_order
import java.util.Arrays;
import java.util.function.Consumer;

public class UniquePermutations
{
    static void generateUniquePermutations(String s, Consumer<String> consumer)
    {
        char[] array = s.toCharArray();
        Arrays.sort(array);
        for (;;)
        {
            consumer.accept(String.valueOf(array));

            // find the rightmost position whose character can still be increased
            int changePos = array.length - 2;
            while (changePos >= 0 && array[changePos] >= array[changePos + 1])
                --changePos;
            if (changePos < 0)
                break; // all done

            // find the rightmost character greater than array[changePos]
            int swapPos = changePos + 1;
            while (swapPos + 1 < array.length && array[swapPos + 1] > array[changePos])
                ++swapPos;
            char t = array[changePos];
            array[changePos] = array[swapPos];
            array[swapPos] = t;

            // reverse the suffix after changePos to get the next permutation
            for (int i = changePos + 1, j = array.length - 1; i < j; ++i, --j)
            {
                t = array[i];
                array[i] = array[j];
                array[j] = t;
            }
        }
    }

    public static void main (String[] args) throws java.lang.Exception
    {
        StringBuilder line = new StringBuilder();
        generateUniquePermutations("banana", s -> {
            if (line.length() > 0)
            {
                if (line.length() + s.length() >= 75)
                {
                    System.out.println(line.toString());
                    line.setLength(0);
                }
                else
                    line.append(" ");
            }
            line.append(s);
        });
        System.out.println(line);
    }
}
Here is the output:
aaabnn aaanbn aaannb aabann aabnan aabnna aanabn aananb aanban aanbna
aannab aannba abaann abanan abanna abnaan abnana abnnaa anaabn anaanb
anaban anabna ananab ananba anbaan anbana anbnaa annaab annaba annbaa
baaann baanan baanna banaan banana bannaa bnaaan bnaana bnanaa bnnaaa
naaabn naaanb naaban naabna naanab naanba nabaan nabana nabnaa nanaab
nanaba nanbaa nbaaan nbaana nbanaa nbnaaa nnaaab nnaaba nnabaa nnbaaa
Alright, so I was implementing a solution to a problem which starts off by giving you an (n,n) grid. It requires me to start at (1,1), visit certain points in the grid, marked as '*', and then finally proceed to (n,n). The size of the grid is guaranteed to be no more than 15, and the number of points to visit, '*', is guaranteed to be >=0 and <=n-2. The start and end points are always empty. There are certain obstacles, '#', which I cannot step on. Also, if I have visited a point before reaching a certain '*', I can go through it again after collecting the '*'.
Here is what my solution does. I made a data structure called 'Node' which has 2 integer fields (x,y). It's basically a tuple.
class Node
{
    int x, y;

    Node(int x1, int y1)
    {
        x = x1;
        y = y1;
    }
}
While taking in the grid, I maintain a Set which stores the coordinates of the '*' cells in the grid.
Set<Node> points=new HashSet<Node>();
I maintain a grid array and also a distance array
char [][]
int distances [][]
Now what I do is apply BFS with (1,1) as the source. As soon as I encounter any '*' (which I believe will be the closest one, since BFS gives the shortest path in an unweighted graph), I remove it from the Set.
Then I apply BFS again, with my source now being the coordinate of the last '*' found. Every time, I reset the distance array since my source coordinate has changed. For the grid array, I reset the cells marked 'V' (visited) in the previous iteration.
This entire process continues until I reach the last '*'.
BTW, if any BFS returns -1, the program prints '-1' and quits.
Now, if I have successfully reached all '*' in the shortest possible way (I guess?), I set the (n,n) coordinate in the grid to '*' and apply BFS one last time. This way I get to the final point.
Now my solution seems to be failing somewhere. Have I gone wrong somewhere? Is my concept wrong? Does this greedy approach fail? Getting the shortest path between all '*' checkpoints should eventually give me the shortest path, IMO.
I looked around and saw that this problem is similar to the Travelling Salesman Problem, and that it is also solvable with a mix of Dynamic Programming and DFS, or with the A* algorithm. I have no clue how, though. Someone even suggested Dijkstra between each '*', but as far as I know, in an unweighted graph Dijkstra and BFS work the same. I just want to know why this BFS solution fails.
Finally, here is my code:
import java.io.*;
import java.util.*;

/**
 * Created by Shreyans on 5/2/2015 at 2:29 PM using IntelliJ IDEA (Fast IO Template)
 */
//ADD PUBLIC FOR CF,TC
class Node
{
    int x, y;

    Node(int x1, int y1)
    {
        x = x1;
        y = y1;
    }
}

class N1
{
    // Data structures and data types used
    static char grid[][];
    static int distances[][];
    static int r = 0, c = 0, s1 = 0, s2 = 0, f1 = 0, f2 = 0;
    static int dx[] = {1, -1, 0, 0};
    static int dy[] = {0, 0, -1, 1};
    static Set<Node> points = new HashSet<Node>();
    static int flag = 1;

    public static void main(String[] args) throws IOException
    {
        Scanner sc = new Scanner(System.in);
        int t = sc.nextInt(); // testcases
        for (int ixx = 0; ixx < t; ixx++)
        {
            flag = 1;
            r = sc.nextInt();
            if (r == 1)
            {
                sc.next(); // Taking in '.' basically
                System.out.println("0"); // Already there
                continue;
            }
            c = r; // Columns guaranteed to be the same as rows. It's an nxn grid
            grid = new char[r][c];
            distances = new int[r][c];
            points.clear();
            for (int i = 0; i < r; i++)
            {
                char[] x1 = sc.next().toCharArray();
                for (int j = 0; j < c; j++)
                {
                    grid[i][j] = x1[j];
                    if (x1[j] == '*')
                    {
                        points.add(new Node(i, j));
                    }
                }
            } // built grid
            s1 = s2 = 0;
            distances[s1][s2] = 0; // for 0,0
            int ansd = 0;
            while (!points.isEmpty())
            {
                for (int i = 0; i < r; i++)
                {
                    for (int j = 0; j < c; j++)
                    {
                        distances[i][j] = 0;
                        if (grid[i][j] == 'V') // Visited
                        {
                            grid[i][j] = '.';
                        }
                    }
                }
                distances[s1][s2] = 0;
                int dis = BFS();
                if (dis != -1)
                {
                    ansd += dis; // Adding on (minimum?) distances
                    //System.out.println("CURR DIS: "+ansd);
                }
                else
                {
                    System.out.println("-1");
                    flag = 0;
                    break;
                }
            }
            if (flag == 1)
            {
                for (int i11 = 0; i11 < r; i11++)
                {
                    for (int j1 = 0; j1 < c; j1++)
                    {
                        if (grid[i11][j1] == 'V') // These points become accessible in the next iteration again
                        {
                            grid[i11][j1] = '.';
                        }
                        distances[i11][j1] = 0;
                    }
                }
                f1 = r - 1;
                f2 = c - 1;
                grid[f1][f2] = '*';
                int x = BFS();
                if (x != -1)
                {
                    System.out.println((ansd + x)); // Final distance
                }
                else
                {
                    System.out.println("-1"); // Not possible
                }
            }
        }
    }

    public static int BFS()
    {
        // Printing current grid correctly according to concept
        System.out.println("SOURCE IS:" + (s1 + 1) + "," + (s2 + 1));
        for (int i2 = 0; i2 < r; i2++)
        {
            for (int j1 = 0; j1 < c; j1++)
            {
                System.out.print(grid[i2][j1]);
            }
            System.out.println();
        }
        Queue<Node> q = new LinkedList<Node>();
        q.add(new Node(s1, s2));
        while (!q.isEmpty())
        {
            Node p = q.poll();
            for (int i = 0; i < 4; i++)
            {
                if (((p.x + dx[i] >= 0) && (p.x + dx[i] < r)) && ((p.y + dy[i] >= 0) && (p.y + dy[i] < c)) && (grid[p.x + dx[i]][p.y + dy[i]] != '#'))
                { // If point is in range
                    int cx, cy;
                    cx = p.x + dx[i];
                    cy = p.y + dy[i];
                    distances[cx][cy] = distances[p.x][p.y] + 1; // Distances
                    if (grid[cx][cy] == '*') // destination
                    {
                        for (Node rm : points) // finding the node and removing it
                        {
                            if (rm.x == cx && rm.y == cy)
                            {
                                points.remove(rm);
                                break;
                            }
                        }
                        grid[cx][cy] = '.'; // It is walkable again
                        s1 = cx;
                        s2 = cy; // next source set
                        return distances[cx][cy];
                    }
                    else if (grid[cx][cy] == '.') // Normal tile. Now setting to visited
                    {
                        grid[cx][cy] = 'V'; // Adding to visited
                        q.add(new Node(cx, cy));
                    }
                }
            }
        }
        return -1;
    }
}
Here is my code in action for a few test cases, giving the correct answer:
JAVA: http://ideone.com/qoE859
C++ : http://ideone.com/gsCSSL
Here is where my code fails: http://www.codechef.com/status/N1,bholagabbar
Your idea is wrong. I haven't read the code because what you describe will fail even if implemented perfectly.
Consider something like this:
x....
.....
..***
....*
*...*
You will traverse the maze like this:
x....
.....
..123
....4
*...5
Then go from 5 to the bottom-left * and back to 5, taking 16 steps. This however:
x....
.....
..234
....5
1...6
Takes 12 steps.
The correct solution to the problem involves brute force. Generate all permutations of the * positions, visit them in the order given by the permutation and take the minimum.
13! is rather large though, so this might not be fast enough. There is a faster solution by dynamic programming in O(2^k), similar to the Travelling Salesman Dynamic Programming Solution (also here).
I don't have time to talk about the solution much right now. If you have questions about it, feel free to ask another question and I'm sure someone will chime in (or leave this one open).
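For reference, here is a sketch of that O(2^k) dynamic program in Java. It assumes you have already run BFS from every special cell to fill a dist matrix, where index 0 is the start cell, indices 1..k are the '*' cells and index k+1 is the goal; the names and layout are my own, not from the original post.

import java.util.Arrays;

public class CollectAllStars {
    // dist[i][j]: shortest BFS distance between special cells i and j, where
    // 0 = start, 1..k = the '*' cells, k+1 = the goal; -1 means unreachable
    static int shortestTour(int[][] dist, int k) {
        final int INF = Integer.MAX_VALUE / 4;
        int full = 1 << k;
        // dp[mask][i] = shortest walk from the start that has collected exactly the
        // stars in 'mask' and currently stands on star i
        int[][] dp = new int[full][k];
        for (int[] row : dp) Arrays.fill(row, INF);
        for (int i = 0; i < k; i++)
            if (dist[0][i + 1] >= 0) dp[1 << i][i] = dist[0][i + 1];
        for (int mask = 0; mask < full; mask++)
            for (int i = 0; i < k; i++) {
                if ((mask & (1 << i)) == 0 || dp[mask][i] >= INF) continue;
                for (int j = 0; j < k; j++) {
                    if ((mask & (1 << j)) != 0 || dist[i + 1][j + 1] < 0) continue;
                    int next = mask | (1 << j);
                    dp[next][j] = Math.min(dp[next][j], dp[mask][i] + dist[i + 1][j + 1]);
                }
            }
        int best = INF;
        if (k == 0 && dist[0][k + 1] >= 0) best = dist[0][k + 1];  // no stars: go straight to the goal
        for (int i = 0; i < k; i++)
            if (dp[full - 1][i] < INF && dist[i + 1][k + 1] >= 0)
                best = Math.min(best, dp[full - 1][i] + dist[i + 1][k + 1]);
        return best >= INF ? -1 : best;
    }
}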
This was asked in an interview
"What is the most efficient way to implement a shuffle function in a music
player to play random songs without repetition"
I suggested a linked-list approach, i.e. use a linked list, generate a random number, and remove that item/song from the list (this way we ensure that no song is repeated).
Then I suggested a bit-vector approach, but he wasn't satisfied at all.
So what, according to you, is the best approach to implement such a function?
Below are some implementations. I also had difficulties during the interview but after the interview I saw that the solution is simple.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class MusicTrackProgram {

    // O(n) in-place swapping (Fisher-Yates: swap position 'last' with a random position in [0, last])
    public static List<MusicTrack> shuffle3(List<MusicTrack> input) {
        Random random = new Random();
        int last = input.size() - 1;
        while (last >= 0) {
            int randomInt = random.nextInt(last + 1);
            // O(1)
            MusicTrack randomTrack = input.get(randomInt);
            MusicTrack temp = input.get(last);
            // O(1)
            input.set(last, randomTrack);
            input.set(randomInt, temp);
            --last;
        }
        return input;
    }

    // expected O(n log n) because of the retries, and needs an extra isUsed field on the track
    public static List<MusicTrack> shuffle(List<MusicTrack> input) {
        List<MusicTrack> result = new ArrayList<>();
        Random random = new Random();
        while (result.size() != input.size()) {
            int randomInt = random.nextInt(input.size());
            // O(1)
            MusicTrack randomTrack = input.get(randomInt);
            if (randomTrack.isUsed) {
                continue;
            }
            // O(1)
            result.add(randomTrack);
            randomTrack.isUsed = true;
        }
        return result;
    }

    // very inefficient O(n^2); note that it empties the input list
    public static List<MusicTrack> shuffle2(List<MusicTrack> input) {
        List<MusicTrack> result = new ArrayList<>();
        Random random = new Random();
        while (!input.isEmpty()) {
            int randomInt = random.nextInt(input.size());
            // O(1)
            MusicTrack randomTrack = input.get(randomInt);
            // O(1)
            result.add(randomTrack);
            // O(n)
            input.remove(randomTrack);
        }
        return result;
    }

    public static void main(String[] args) {
        List<MusicTrack> musicTracks = MusicTrackFactory.generate(1000000);
        List<MusicTrack> result = shuffle3(musicTracks);
        result.stream().forEach(x -> System.out.println(x.getName()));
    }
}
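Note that MusicTrack and MusicTrackFactory are not shown in the answer above; a minimal stand-in that makes the snippet compile might look like this (purely illustrative):

import java.util.ArrayList;
import java.util.List;

class MusicTrack {
    private final String name;
    boolean isUsed;                         // flag used by shuffle() above

    MusicTrack(String name) { this.name = name; }
    public String getName() { return name; }
}

class MusicTrackFactory {
    // generate n placeholder tracks
    static List<MusicTrack> generate(int n) {
        List<MusicTrack> tracks = new ArrayList<>();
        for (int i = 0; i < n; i++) tracks.add(new MusicTrack("Track " + i));
        return tracks;
    }
}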
There is no perfect answer; I guess this sort of question is aimed at starting a discussion. Most likely your interviewer wanted to hear about the Fisher-Yates shuffle (aka Knuth shuffle).
Here is a brief outline from the wiki:
Write down the numbers from 1 through N.
Pick a random number k between one and the number of unstruck numbers remaining (inclusive).
Counting from the low end, strike out the kth number not yet struck out, and write it down elsewhere.
Repeat from step 2 until all the numbers have been struck out.
The sequence of numbers written down in step 3 is now a random permutation of the original numbers.
You should mention its inefficiencies and benefits, how you could improve this, throw in a few lines of code and discuss what and how you would test this code.
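For example, a few lines of Java along those lines (a standard in-place Fisher-Yates; the Shuffler name is just for illustration):

import java.util.Random;

class Shuffler {
    // in-place Fisher-Yates shuffle: O(n) time, O(1) extra space, unbiased
    static <T> void shuffle(T[] items, Random random) {
        for (int i = items.length - 1; i > 0; i--) {
            int j = random.nextInt(i + 1);   // pick from the not-yet-fixed prefix [0, i]
            T tmp = items[i];
            items[i] = items[j];
            items[j] = tmp;
        }
    }
}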
We can use a linked list and a queue to implement song shuffling in an MP3 player.
We can extend this to the following functionality:
Add a new song
Delete a song
Randomly play a song
Add a song to the play queue
Suppose initially we have 6 songs stored as a linked list.
The linked list has 2 pointers: start and end.
totalSongCount = 6
Randomly play a song:
We will generate a random number between 1 and totalSongCount. Let this be 4.
We will remove the node representing song 4 and keep it after the end pointer.
We will decrement totalSongCount (totalSongCount--).
Next time the random number will be generated between 1 and 5, since we have decremented totalSongCount; we can repeat the process.
To add a new song, just add it to the linked list and make it the head pointer (add it at the beginning),
then increment totalSongCount (totalSongCount++).
To delete a song, first find it and delete it.
Also keep track of whether it is after the end pointer; if it is not, just decrement totalSongCount (totalSongCount--).
The selected song has two options:
either play it at that moment, or
add it to a playlist (a separate queue).
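A small sketch of that bookkeeping (the names are mine; a LinkedList keeps the unplayed songs in the first totalSongCount positions and played songs are moved behind that range):

import java.util.LinkedList;
import java.util.Random;

class PlaylistShuffler {
    private final LinkedList<String> songs = new LinkedList<>();
    private int totalSongCount;                    // songs not yet played in this cycle
    private final Random random = new Random();

    PlaylistShuffler(Iterable<String> names) {
        for (String name : names) songs.add(name);
        totalSongCount = songs.size();
    }

    // pick a random unplayed song, move it behind the "end pointer", shrink the count
    String playRandomSong() {
        if (totalSongCount == 0) totalSongCount = songs.size();  // start a new cycle (my own addition)
        int index = random.nextInt(totalSongCount);
        String song = songs.remove(index);
        songs.addLast(song);
        totalSongCount--;
        return song;
    }

    // add a new song at the beginning, as in the description above
    void addSong(String name) {
        songs.addFirst(name);
        totalSongCount++;
    }
}

Note that LinkedList.remove(index) is O(n), which is the trade-off of the linked-list approach compared with an array-based Fisher-Yates shuffle.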
I think the solution below should work:
import java.util.LinkedList;
import java.util.Queue;
import java.util.Random;

class InvalidInput extends Exception {
    public InvalidInput(String str) {
        super(str);
    }
}

class SongShuffler {
    String songName[];
    int cooldownPeriod;
    Queue<String> queue;
    int lastIndex;
    Random random;

    public SongShuffler(String arr[], int k) throws InvalidInput {
        if (arr.length < k)
            throw new InvalidInput("Arr length should be greater than k");
        songName = arr;
        cooldownPeriod = k;
        queue = new LinkedList<String>();
        lastIndex = arr.length - 1;
        random = new Random();
    }

    public String getSong() {
        // once the cooldown queue is full, move its oldest song back into the playable range
        if (queue.size() == cooldownPeriod) {
            String s = queue.poll();
            songName[lastIndex + 1] = s;
            lastIndex++;
        }

        // pick any playable song (indices 0..lastIndex inclusive)
        int ind = random.nextInt(lastIndex + 1);
        String ans = songName[ind];
        queue.add(ans);
        songName[ind] = songName[lastIndex];
        lastIndex--;
        return ans;
    }
}